Artificial intelligence has been discussed for many years, but today we are witnessing «spectacular progress at an incredible speed» in medicine, the life sciences, and physics, surprising even experts like James Manyika, 58, originally from Zimbabwe, who earned a PhD in computer science, AI, and robotics from Oxford University 25 years ago. Today Manyika is Google’s first Senior Vice President for Technology and Society and reports directly to CEO Sundar Pichai. His mandate includes overseeing Google Research and Google Labs: in other words, he is the man charged with pursuing Google’s most ambitious innovations in AI, computing, and science responsibly. He also co-chairs the United Nations AI Advisory Board, created to help the international community govern artificial intelligence. In this interview, his first with an Italian newspaper, Manyika discusses the formidable changes AI will bring to humanity, from health to the fight against climate change: enabling us, for example, to create new materials, including materials for electric batteries, or to predict floods seven days in advance. He does not hide the risks of the new technology, which «must be regulated and governed globally.»
Can you give us an idea of what to expect from AI in the near future?
«One of the extraordinary things is that this technology is moving so quickly. But I would like to start with some things we have done recently. Last December, we presented Gemini Ultra, which was the first model in the world to surpass humans in the so-called MMLU (Massive Multitask Language Understanding).»
What do you mean?
«MMLU is an important benchmark because it tests knowledge and reasoning across 57 domains, from law to mathematics. If you had looked at this benchmark a year ago, the best available AI systems scored about 35-36%, against a human-expert level of about 88%. Gemini, at 90%, is the first model to surpass it. I’ll mention three other things that have happened in this field in the last three weeks, which give a sense of the speed. But first, I need to make a premise.»
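To make the percentages concrete: a benchmark like MMLU is scored as multiple-choice accuracy per domain, aggregated across domains. The sketch below shows one common way to aggregate (a macro-average, weighing each domain equally); the domains and answers are made up for illustration, and real MMLU scoring details may differ.

```python
# Illustrative sketch of an MMLU-style score: per-domain multiple-choice
# accuracy, then a macro-average across domains. All data below is invented.

def domain_accuracy(predictions, answers):
    """Fraction of questions answered correctly in one domain."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

def mmlu_style_score(results_by_domain):
    """Macro-average: each domain counts equally, regardless of size."""
    accs = [domain_accuracy(p, a) for p, a in results_by_domain.values()]
    return sum(accs) / len(accs)

results = {
    "law":         (["A", "C", "B", "D"], ["A", "C", "D", "D"]),  # 3/4 correct
    "mathematics": (["B", "B", "A", "A"], ["B", "B", "A", "C"]),  # 3/4 correct
}
print(f"{mmlu_style_score(results):.1%}")  # → 75.0%
```

A macro-average is why a model can look strong overall while still lagging in individual domains: every domain, large or small, contributes the same weight.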
Go ahead.
«The first thing is that the model itself has become natively multimodal. The same model no longer produces just text but also video, images, and audio, and it works in both directions: you can write some text and get an image, or input an image and get text. And now we have so-called ‘long context’: previously the input was limited, but now, for the first time, we have shown that we can provide a context of up to 2 million tokens. It’s breathtaking. I can submit a 3-hour video as a request, and the model explains what it is about. We did this, for example, with a medical operation. This not only allows for much more precise results but also lets us reason with the model and understand problems better. A year ago, this was technically impossible.»
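A back-of-envelope calculation shows why a 2-million-token context matters for video. The sampling rate and tokens-per-frame figures below are illustrative assumptions, not official values for Gemini or any other model; they only show the shape of the arithmetic.

```python
# Rough token budget for feeding a long video into a long-context model.
# FRAMES_PER_SECOND and TOKENS_PER_FRAME are assumed, illustrative values.

FRAMES_PER_SECOND = 1    # assume the video is sampled at 1 frame per second
TOKENS_PER_FRAME = 160   # assumed cost of encoding one sampled frame

def video_tokens(hours: float) -> int:
    """Approximate tokens needed to represent `hours` of video."""
    frames = int(hours * 3600 * FRAMES_PER_SECOND)
    return frames * TOKENS_PER_FRAME

BUDGET = 2_000_000  # a 2-million-token context window

for hours in (1, 3):
    needed = video_tokens(hours)
    verdict = "fits" if needed <= BUDGET else "does not fit"
    print(f"{hours} h of video ≈ {needed:,} tokens ({verdict})")
```

Under these assumptions a 3-hour video comes to roughly 1.7 million tokens, which is why it only became feasible as an input once context windows reached the millions.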
What are the three breakthroughs you mentioned?
«Three weeks ago, we launched a model called Med-Gemini, designed specifically for the medical field. Beyond basic tasks, it now understands radiology images, DNA datasets, and the like, and it is far better than most expert human doctors on 11 of the 15 benchmarks.»
The second one?
«Two weeks ago, we launched AlphaFold 3. We started AlphaFold about a year ago: it was the program that solved one of biology’s great problems, understanding the structure of proteins. Determining the structure of a single protein typically takes 3 to 4 years of lab work, and before AlphaFold, biologists worldwide had solved the structures of about 110,000 proteins. A year ago, AlphaFold accurately predicted the structure of all known proteins, about 200 million. But two weeks ago, we announced that AlphaFold 3 understands not only protein structures but all molecules: DNA, RNA, and so on. In biology, AlphaFold 3 is the Google search of proteins. And it’s free and available to everyone. Today, 1.8 million biologists in 190 countries use AlphaFold. Some are working on malaria vaccines and other medicines.»
And the third thing?
«It is the 3D mapping of the synapses of the human brain. There is still a lot of work to be done, but understanding the brain’s structure opens up unprecedented possibilities in neuroscience. This happened just a week ago. The list of progress is very long. Six months ago, our AI program in material sciences discovered 2.2 million new crystals that didn’t exist before. Of these, 380,000 are stable enough to be synthesized.»
Are you saying that new materials can be synthesized to replace rare earth elements?
«Some companies are already trying to synthesize certain crystals to produce new materials for battery technology and solar panels. But I could give examples from climate modeling and many other things: the applications of AI to science are simply fantastic. Take flood prediction. So far, meteorologists have been able to give 2 to 3 days’ notice; our model gives seven days’ notice. This is a milestone because it leaves enough time to save human lives, which is especially important given how frequent such events have become due to climate change. We started with a pilot project a year and a half ago in Bangladesh and then extended it to all of southern India. Today we use it in 83 countries worldwide, including 28 in Africa, and we are also working in some European countries and some areas of the United States. Predicting a flood a week in advance has a huge impact on society, but the benefit for low-income countries is far greater.»
Artificial intelligence could reverse the productivity slowdown and drive a new economic revolution. Given the gloomy growth forecasts globally, what will it take to see an impact on the economy and how long will it take?
«The potential for increased productivity is considerable. But for the impact on growth to happen quickly, AI must be adopted by sufficiently large sectors of the economy, such as healthcare, transportation, and so on. So far, AI has mainly been embraced by the tech sector, which in the US, for example, represents just 4% of the workforce. That sector is highly productive, but on its own it is too small to move the aggregate numbers. Moreover, we must not forget the lesson of the past and risk a new ‘Solow paradox,’ in which we see AI everywhere except in the productivity statistics. It will take time and many changes across economic sectors: companies will need to modify processes and structures to benefit from the new technology, and they will need to reorganize the way they work. The workforce will also need to change. So widespread adoption and a profound reorganization are required, and all of this takes time. But it also depends on what kind of productivity we want.»
What do you mean?
«We need to ask ourselves what we are using artificial intelligence for. If a company adopts AI only to reduce the workforce but not to increase production, it doesn’t impact growth. It’s just becoming more efficient, not necessarily more productive. But a fourth thing is also necessary.»
What is that?
«Policymakers need to ensure that all this happens. In other words, policymakers need to create the right incentives for the use of technology. They also need to encourage complementary investments, including infrastructure. Because without enabling infrastructure, the technology alone doesn’t matter. If these four things were achieved, I think the impact on the economy would be really very fast.»
Many fear that many jobs will be done by AI. We are already witnessing the layoff of many people by tech groups, including Google. Do you foresee large waves of layoffs, at least during the transition phase? Which sectors are most vulnerable? What is the future of work, given that in 2019 you co-chaired the Future of Work Commission in California?
«The layoffs in tech are unrelated: they are largely a correction after the boom of the Covid pandemic, when the sector grew tremendously. Now we are returning to normal. But it is true that we need to ask what the future of work will be. Most research indicates that some jobs will decrease, others will increase, and the majority will change. Probably more jobs will be created than lost. This does not mean some occupations will not decline, but the net result will be positive. However, worker retraining (reskilling) will be crucial. A study by the ILO (International Labour Organization), conducted with data from 150 countries, concludes that generative artificial intelligence will not destroy jobs but will increase them sixfold. A Stanford University article titled ‘The Turing Trap’ argues that the design of technology matters less than how it is used, and here I am referring to employers. So we also need to think about what incentives employers should have; this will make the difference. Today, incentives favor capital over labor, promoting automation that destroys jobs instead of augmenting humans. The good news is that the latest studies show generative AI benefits less-skilled workers more than past technologies did. Even those who cannot write code can use AI and ask the model to write the code for them.»
What could go wrong? What are the major risks of AI?
«There are risks and complexities. Among the former, I include performance risks: when AI results are not reliable. We know that LLMs sometimes hallucinate and can give untrue answers. There are many such performance issues; we are still at the beginning of this technology, and we need to solve them because they could cause harm. There are also biased outputs, for example. Another category of risks is what I call improper applications and misuse: bad actors using the technology for purposes other than those for which it was created, such as criminal activity and disinformation, which is particularly important in an election year like this one, with millions of people going to vote. To combat deepfakes, at Google we launched a sort of ‘watermark’ that marks everything created with AI, whether text, image, video, or audio. The complexities, on the other hand, concern the impact and changes AI brings to systems, from education to the labor market to national security.»
One of the problems with AI is that it consumes a lot of energy.
«True. There is a lot of work to do, and some is already being done: improving the efficiency of these models and inventing new architectures that are less computationally intensive. It is a work in progress, but I believe we should also remember that, at the same time, these technologies will help us address climate change and the energy transition. We have seen, for example, that using AI in data centers has allowed us to improve energy efficiency by over 40%; we would never have achieved that without it. And I am willing to accept the compromise of spending 1% more energy while saving 10 or 100 times that in carbon emissions.»
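The trade-off Manyika describes can be made explicit with a small worked example: spend a little extra energy running AI, and count it against the emissions it helps avoid. All numbers below are illustrative assumptions, not measured figures from Google or anyone else.

```python
# Hedged sketch of the energy/carbon trade-off: a 1% AI overhead versus
# savings 10x or 100x that overhead. All figures are illustrative.

def net_emissions_change(baseline, ai_overhead_pct, savings_multiplier):
    """Net change in emissions (negative means a net reduction)."""
    overhead = baseline * ai_overhead_pct / 100   # extra emissions from running AI
    savings = overhead * savings_multiplier       # emissions avoided thanks to AI
    return overhead - savings

BASE = 1000.0  # illustrative baseline emissions, arbitrary units

for mult in (10, 100):
    change = net_emissions_change(BASE, 1, mult)
    print(f"1% overhead, {mult}x savings → net change: {change:+.0f} units")
```

Under these assumptions, a 1% overhead that unlocks 10x savings already yields a large net reduction; at 100x the overhead is negligible by comparison, which is the compromise the quote defends.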
You co-chair the AI Advisory Board of the United Nations, created to support the international community’s effort to govern artificial intelligence. How is the work progressing? On which aspects is there convergence and on which is there debate within the committee?
«Because the body comprises 39 members from 33 countries, with figures from academia, governments, industry, and civil society, the perspectives are very diverse. In summary, I can say that the so-called Global South, which includes Latin America, Africa, and Asia, is generally much more optimistic about AI and its potential to transform their economies by improving access to knowledge and health than Europe and North America are. We have reached the first milestone: five weeks ago, the UN General Assembly unanimously adopted a resolution on AI largely based on our work, which had produced an interim report last December with draft recommendations that received broadly positive feedback from member states. We are now contributing to the UN process to prepare the Global Digital Compact, which member states are expected to agree on at the summit scheduled for September. Our board is preparing its final report, to be presented in August. Many expected us to recommend a new agency for AI; however, we have agreed that we do not need one.»
How do you evaluate the AI Act adopted by the European Union?
«When the ideas were still being formulated, I was rather worried. But the final result is better than the starting point, thanks especially to the work of Italy. I liked that they adopted a risk-based approach. I would still have encouraged a bit more innovation. My view, at least regarding regulations, is that they should always do two things. They should address the risks and challenges we do not want, but also enable what we do want, namely innovation and the progress of science and the economy.»
How can Europe catch up in AI, given that the major tech players are in the United States and China?
«It is certainly not due to a lack of talent. But investments are a crucial part of innovation. As with regulation, which, as I have mentioned, should not only address what we don’t want but also enable what we do want. In this case, I hope that Europe will aim to become more innovative and transformative, to compete on the global stage with companies that serve the world and are the equivalent of Google and many others.»
If you had a teenage child, what would you advise them to study?
«It is essential to study STEM subjects, namely science, technology, engineering, and mathematics. But this should be combined with a deep knowledge of the humanities, because one of the things that is becoming important is that many of the questions about the impact and use of these technologies are now interdisciplinary: it is difficult to separate, for example, AI research and development from philosophy and ethics. My son is 22 years old, and I am very pleased that he has earned a double degree in computer science and philosophy. But my final message is this: we must learn how to learn.»
29 May 2024
© RIPRODUZIONE RISERVATA