Artificial intelligence got creative in 2022, generating stunning text, video, and images from scratch. It is also our top tech prediction for 2023. But alongside the fascination comes fear.
Beyond writing essays and creating images, AI will affect every industry, from banking to healthcare, but it is not without its biases, which can be harmful.
Here is how AI could evolve in 2023 and what to keep in mind.
Chatbots and competition
In early 2022, OpenAI released DALL-E 2, a deep-learning model that generates images from written instructions. Google and Meta then released AI systems that can produce videos from text prompts.
Just a few weeks ago, OpenAI released ChatGPT, a chatbot that produces eloquent, seemingly well-researched text from a short prompt.
The next thing to watch, which could arrive in 2023, is of course an upgrade: GPT-4. Like its predecessor, it is rumored to be able to translate between languages, summarize and generate text, answer questions, and power a chatbot.
It will also reportedly have 1 trillion parameters, which would mean more accurate answers delivered even faster.
But Elon Musk, a co-founder of OpenAI, has already criticized ChatGPT for refusing to answer questions about specific topics, such as the environment, because of how it has been programmed.
Another thing to watch in 2023 is how other tech giants will respond to the competition.
Google management issued a “code red” when ChatGPT was released over concerns about how it would affect Google’s search engine, according to The New York Times.
AI in business and tackling the world’s problems
AI also has the potential to play a role in the fight against climate change, as it can help companies make decisions about sustainability and cut carbon emissions much more easily.
“This technology can help businesses and governments meet this challenge and make the world an environmentally better place for us,” said Ana Paula Assis, IBM’s General Manager for EMEA.
She told Euronews Next that AI enables faster decision-making, which is especially necessary with an aging population as it “puts a lot of pressure on the skills and capabilities that we may have in the marketplace.”
Assis said that this is why the application of AI for automation has now become “urgent and imperative.”
But AI will not only transform business. It can also help doctors reach a diagnosis by pooling patient data to analyze symptoms.
It can even help you with banking and loans.
Crédit Mutuel in France has embraced AI to help its client advisers provide better and faster responses to customers. Meanwhile, NatWest in the UK is using it to help customers make better-informed decisions about mortgages.
Demand for AI in companies already increased in 2022, and it looks set to keep growing.
IBM research shows that between the first and second quarters of 2022, there was a 259 percent increase in job postings in the AI domain, Assis said.
AI and ethics
As the technology develops in 2023, so will the deeper questions about the ethics of AI.
While AI can help reduce the impact of human bias, it can also make the problem worse.
Amazon, for example, stopped using a hiring algorithm after it was found to favor applications that used words like “executed” or “captured,” which appeared more often on male resumes.
Meanwhile, ChatGPT refuses to write a racist blog post, saying it is “not capable of generating offensive or harmful content.” But it can be coaxed into doing so if the request is rephrased in a way that skirts the topic.
Such biased or harmful content is possible because the AI is trained on hundreds of billions of words scraped from websites and social networks.
AI can also perpetuate bias when systems make decisions based on past training data that reflects biased human decisions or historical and social inequalities. Gaps in the available data compound the problem — facial recognition systems, for example, may have been trained primarily on images of white men.
The responsibility for fairer, less harmful AI therefore falls not only on the AI companies that create the tools, but also on the companies that use the technology.
IBM research shows that 74 percent of companies surveyed said they do not yet have all the necessary capabilities to ensure that the data used to train AI systems is not biased.
Another problem is the lack of tools and frameworks that give companies the ability to explain and be transparent about how algorithms work.
“These are really the built-in capabilities that we need for enterprises to realize in order to provide a fairer, safer, more secure use of AI,” Assis said.