Generative AI Gains Ground
(NewsUSA) – Artificial intelligence (AI) continues to grip the imagination and the headlines, and it will keep shaping productivity, security, and economics worldwide. But there is also a role for smart regulation, according to Andrew Moore, CEO of Lovelace AI and co-chair of the Generative AI Task Force for the non-profit Special Competitive Studies Project (SCSP).
Use of generative AI, in which machine-learning systems such as ChatGPT predict the next word, continues to expand globally, but the technology is still a work in progress, says Moore. “The world is very demanding, and it’s not OK for generative AI to be really impressive 96% of the time and really stupid 4% of the time.”
For example, the potential of self-driving cars has yet to be fully realized despite some impressive demos, Moore says. “That’s so similar to what we’re talking about with generative AI at the moment, where sometimes GPT gives amazing answers that it seems so much smarter than any human, and sometimes it gives crazy answers,” he notes.
Regulation is needed, but businesses have expressed concerns that regulations will be impractical, stifle innovation, and prevent Western countries from competing with the rest of the world, Moore says.
In a recent podcast, Moore emphasized two areas in which AI innovation should be supported:
Education. AI offers learners of all ages, not only children, the full attention of a form of instruction that can diagnose where they are confused and help them understand, Moore explains. “I’m so excited about many results of how the use of AI tutors can help kids or other people who perhaps are not able to get that level of attention from a human teacher, or maybe the human teachers just don’t have the expertise to teach a particular topic,” he adds. Too much regulation, such as a blanket ban on AI systems listening to students’ voices as they ask questions, would hinder those educational benefits, and countries that impose it would fall behind in learning outcomes for kids, Moore says.
National security. Autonomy is the name of the game when it comes to using AI in national security, says Moore. The United States has careful rules about the use of autonomy in weapons systems. “But at the moment, as we’re seeing around the world, the most important use of autonomous platforms is for gathering data,” he says. For example, “in an area where suddenly a surprising number of ships are gathering, you quickly get an eye in the sky, and make sure that the AI is in the sky and secure and operating even in the face of adversaries.”
AI brings potential risks as well as benefits, so it is essential to strike a regulatory balance that manages those risks while still promoting innovation, says Moore.
Visit scsp.ai for more information.