UK AI Safety Summit Preview: Why Now, and Why the UK


Words: Ingrid Lunden

Source: TechCrunch


The promise and perils of AI are hot topics these days. Some say artificial intelligence will save us: helping diagnose malignant diseases, bridging the digital divide in education, and more. Others worry about the threats it poses in war, security, misinformation, and beyond. It has become a pastime for everyday users and has set off alarm bells in the business world.

AI can do a great deal, but it cannot yet quiet the noise of a room full of people talking over each other. This week, academics, regulators, government leaders, start-ups, big tech companies, and dozens of for-profit and non-profit organisations are gathering in the UK to discuss and debate exactly that.

Why the UK? Why Now?

On Wednesday and Thursday, the UK will host the AI Safety Summit at Bletchley Park, the first event of its kind in the country.

Months in the planning, the summit sets out to explore some of the long-term questions and risks AI poses. Its goals are aspirational rather than concrete: “a shared understanding of the risks posed by frontier AI and the need for action”, “a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks”, “appropriate measures which individual organisations should take to increase frontier AI safety”, and so on.

That high-level ambition is reflected in the guest list: senior government officials, industry leaders and prominent thinkers in the field will be present. (According to the latest reports, Elon Musk, President Biden, Justin Trudeau and Olaf Scholz are among those expected to attend.)

The summit sounds exclusive, and it is: the “golden ticket” to attend (as London-based tech founder and writer Azeem Azhar described it) is in short supply. The summit will reportedly be small in scale and mostly closed-door. So a flurry of other events has sprung up around it: talks at the Royal Society (the UK’s national academy of sciences); a large “AI Fringe” conference held across multiple cities; and announcements from numerous task forces, among other things.

Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, put it this way at an evening panel on science and safety at the Royal Society last week: the role of the summit has, in effect, already been cast. In other words, the event at Bletchley will do what it is meant to do, and whatever falls outside its scope becomes an opportunity for others to dig into the remaining questions.

Neff’s panel was a case in point: in a packed hall at the Royal Society, she sat alongside a representative of Human Rights Watch, a national officer from the large trade union Unite, the founder of the Tech Global Institute (a think tank focused on tech equity in the Global South), the head of public policy at the startup Stability AI, and computer scientists from the University of Cambridge.

Meanwhile, the so-called AI Fringe is a fringe in name only. With the Bletchley summit taking place midweek, with a very limited guest list and equally limited visibility into what gets discussed, the AI Fringe quickly expanded around Bletchley and fleshed out the week’s agenda. Reportedly organized not by the government but by a PR firm called Milltown Partners (which has represented companies such as DeepMind and Stripe, and the venture capital firm Atomico), the Fringe, interestingly, runs a full week, spans multiple venues across the country, and is free to anyone who can get a ticket (many events sold out), with many sessions also streamed online.

For all that variety, it is dispiriting that the conversation about AI, young as it is, has already split in two: one gathering for the people in power (mostly invitation-only) and another for the rest of us.

Earlier today, a group of 100 trade unions and campaigners sent a letter to the prime minister saying the government was “squeezing out” their voices by not letting them into Bletchley Park. (They may not have gotten tickets, but their protest was certainly savvy: the group publicised the letter by sharing it with the Financial Times, one of the country’s most elite business publications.)

It is not just ordinary people being left out in the cold. “None of the people I know have been invited,” Carissa Véliz, a lecturer in philosophy at the University of Oxford, said at an AI Fringe event today.

Some, though, see a benefit in keeping things small.

Marius Hobbhahn, an AI research scientist, is co-founder and head of Apollo Research, which is building AI safety tools. He believes a smaller crowd can also produce more focus: “The more people you have in the room, the harder it is to reach any conclusions or to have an effective discussion,” he said.

In the bigger picture, the summit is just one brick in a much wider conversation already underway. Last week, British Prime Minister Rishi Sunak said he intends to set up a new AI safety institute and a research network in the UK to put more time and effort into studying AI’s impact. A group of prominent academics, led by Yoshua Bengio and Geoffrey Hinton, waded into the field with a paper titled “Managing AI Risks in an Era of Rapid Progress”. The United Nations has announced its own task force to explore the implications of AI. And most recently, U.S. President Joe Biden issued an executive order setting AI safety standards for the United States.

“Existential Risk”

One of the biggest debates is whether the idea that AI poses an “existential risk” is exaggerated, perhaps deliberately so, to draw scrutiny away from AI’s more immediate harms.

One of the most frequently cited of those is misinformation, points out Matt Kelly, a professor of the mathematics of systems at the University of Cambridge.

“Misinformation is not new. It isn’t even new to this century, or the last,” he said in an interview last week. “But it’s one of the areas where we think AI poses potential short- and medium-term risks. And those risks develop slowly over time.” Kelly, a fellow of the Royal Society, said the society also ran a red team/blue team exercise focused on scientific misinformation in the run-up to the summit, to see how large language models behave when pitted against one another. “It’s an attempt to better understand what the risks are right now,” he said.

The UK government seems to be playing both sides of that debate, the danger nowhere more evident than in the name of the event it is hosting: the AI Safety Summit.

“Right now, we don’t have a shared understanding of the risks we face,” Sunak said in his speech last week. “And without that consensus, we can’t expect to address those risks together. That is why we will push hard to agree the first-ever international statement about the nature of these risks.”

But in convening the summit, the UK has also positioned itself as a central player in setting the agenda for “what we talk about when we talk about AI”, and that certainly has an economic dimension.

“By making the UK a global leader in safe AI, we will attract even more new jobs and investment from this wave of new technology,” Sunak noted. (Other departments have gotten the memo, too: the Home Secretary today hosted an event with the Internet Watch Foundation and a number of major consumer app companies, including TikTok and Snap, on tackling the proliferation of AI-generated sexual abuse imagery.)

Involving Big Tech may help in some respects, but critics tend to see it as a problem as well. “Regulatory capture”, in which the industry’s biggest players take a proactive hand in framing risks and protections, has been another major theme in AI’s brave new world, and this week’s summit is no exception.

Be wary of AI leaders who put up their hands and ask to be regulated, Nigel Toon, CEO of the AI chipmaker Graphcore, astutely observed in a piece he wrote about this week’s summit; governments, he warned, might step in and take them at their word. (He is not exactly an outsider himself, though: he will be attending the summit.)

Meanwhile, many are still debating whether focusing on existential risk, as currently framed, is even a useful exercise.

“I think the way frontier AI has been framed over the past year has put us in a state of fear of the technology,” Ben Brooks, head of public policy at Stability AI, said at the Royal Society panel, citing the “paperclip maximizer” thought experiment (in which an AI tasked with making paperclips destroys the world by pursuing that goal without regard for human needs or safety) as an example of this deliberately limited way of thinking. “People aren’t thinking about the circumstances in which AI will actually be deployed. You can develop it safely. We hope that’s what everyone comes away inspired by: the sense that this is achievable, and that AI can be done safely.”

Others are less certain.

“To be fair, I don’t think ‘existential risk’ is the right term. Let’s call it catastrophic risk,” said Hobbhahn of Apollo Research. Given the pace of development in recent years, with generative AI applications bringing large language models into the mainstream, he believes the biggest concern remains bad actors using AI rather than AI itself running amok: deploying it for biological warfare, in national security situations, and for misinformation that could distort democratic processes. These, he said, are all areas where AI could plausibly play a catastrophic role.

“Turing Award winners are publicly worrying about existential and catastrophic risks… We really should be thinking about this,” he added.

Business Prospects

Serious risks aside, the UK also hopes that by hosting the big conversation about AI it can establish itself as a natural home for AI businesses. Some analysts, however, believe the road to AI investment may not be as smooth as some predict.

“I think the reality is starting to set in that businesses are realizing how much time and money they need to allocate to generative AI projects to get reliable outputs that actually raise productivity and revenue,” said Avivah Litan, a vice president analyst at Gartner. “Even with continual tuning and engineering on a project, they still need human oversight of operations and outputs. Simply put, GenAI’s outputs aren’t reliable enough yet, and it takes significant resources to make them reliable. Models are improving all the time, of course, but that’s the current state of the market. Still, we are seeing more and more projects moving into production.”

AI investment, she believes, “will definitely slow down” among the businesses and government organizations using it. “Vendors are pushing out their AI applications and products, but organizations can’t adopt them as quickly as they’re being pushed. On top of that, GenAI applications carry plenty of risks, for instance democratizing easy access to confidential information even within an organization.”

Just as “digital transformation” has proved a slow burn in practice, enterprise AI investment strategies will need more time, too. “It takes time for businesses to lock down their structured and unstructured data sets and to set permissions properly and efficiently. There’s been too much oversharing in the enterprise, which didn’t really matter much until now,” Litan adds. “Now anyone can access any file that isn’t adequately protected using simple native-language commands, for example in plain English.”

How to balance AI’s commercial interests against the safety and risk questions Bletchley Park will take up says a great deal about the task ahead, and it underscores the tensions in play. Late in the planning, the organisers reportedly worked to extend the discussion beyond high-level safety concerns into areas where risks may actually materialise, such as healthcare, though that shift is not spelled out in the agenda published so far.

“There will be a roundtable of about 100 experts, which is not a small group. I’m a critic, but that doesn’t sound terrible to me,” says Neff, the Cambridge professor. “Now, will global regulation come up as a topic of discussion? Absolutely not. Will we normalise East-West relations? Probably not. But this is the summit we’re getting. And I think there could be some really interesting opportunities coming out of this moment.”
