If you reach a point where progress has outpaced the ability to make these systems safe, would you pause?
I don’t think today’s systems pose any kind of existential risk, so it is still theoretical. Geopolitical questions could actually end up being more complicated. But given enough time, enough care, and the scientific method …
If the timeline is as close as you say, we don’t have much time for care and deliberation.
We don’t have a lot of time. We are increasingly putting resources into safety and things like cybersecurity, and also into research on control and understanding of these systems, sometimes called mechanistic interpretability. And at the same time, we must also have societal debates about institution building. How do we want governance to work? How do we get international agreement, at least on some basic principles, about how these systems are used, distributed and even built?
How much do you think AI will change or eliminate people’s jobs?
What generally tends to happen is that new jobs are created that use the new tools or technologies, and they are actually better. We will see if it is different this time, but for the next few years we will have these incredible tools that enhance our productivity and actually make us almost a little superhuman.
If AGI can do everything that humans can do, it would seem that it could also do the new jobs.
There are many things we won’t want to do with a machine. A doctor could be helped by an AI tool, or you could even have an AI kind of doctor. But you wouldn’t want a robot nurse: there is something about the human empathy aspect of that care that is particularly humanistic.
Tell me what you envision when you look 20 years into the future and, according to your prediction, AGI is everywhere.
If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world: curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If all of this happens, then it should be an era of maximum human prosperity, where we travel to the stars and colonize the galaxy. I think it will begin to happen in 2030.
I am skeptical. We have incredible abundance in the Western world, but we don’t distribute it fairly. As for solving big problems, we don’t need answers so much as resolve. We don’t need AGI to tell us how to fix climate change: we know how. But we don’t do it.
I agree with that. We have been, as a species, as a society, not good at collaborating. Our natural habitats are being destroyed, and that’s partly because it would require people to make sacrifices, and people don’t want to. But this radical abundance from AI would make things feel like a non-zero-sum game.
Would AGI change human behavior?
Yes. Let me give you a very simple example. Access to water will be a huge problem, but we have a solution: desalination. It costs a lot of energy, but if there were renewable, free and clean energy [because AI came up with it] from fusion, then suddenly you solve the problem of access to water. Suddenly it is no longer a zero-sum game.