Friday, 12 July 2024
Tech

Misinformation, errors, and the Pope in a puffer: what quickly developing AI can – and can’t – do

Photo by DeepMind on Unsplash

Recent advances in artificial intelligence have yielded warnings that the rapidly developing technology may result in “ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control”.

That’s according to an open letter signed by more than 1,000 AI experts, researchers and backers, which calls for an immediate pause on the creation of “giant” AIs for six months so that safety protocols can be developed to mitigate their dangers.

But what is the technology currently capable of doing?

Midjourney creates images from text descriptions. It has improved significantly in recent iterations, with version five capable of producing photorealistic images.

Incredible to see so much Gen AI progress in one year

Credit: https://t.co/wyQIILjPss pic.twitter.com/fJwJq492Tc

Midjourney v5 has pushed into photorealism, a goal which has eluded the computer graphics industry for decades (!) 🤯

Insane progression, and all that by 11 people with a shared dream.

🧵 Let’s explore what these breakthrough in Generative AI mean for 3D & VFX as we know it… pic.twitter.com/GlycHcPQqA

Midjourney's photorealistic outputs include the faked images of Donald Trump being arrested, which were created by Eliot Higgins, founder of the Bellingcat investigative journalism network.

Making pictures of Trump getting arrested while waiting for Trump’s arrest. pic.twitter.com/4D2QQfUpLZ

Midjourney was also used to generate the viral image of Pope Francis in a Balenciaga puffer jacket, which has been described by web culture writer Ryan Broderick as “the first real mass-level AI misinformation case”. (The creator of the image has said he came up with the idea after taking magic mushrooms.)

AI-generated image of Pope Francis goes viral on social media. pic.twitter.com/ebfLK4F850

Image generators have raised serious ethical concerns around artistic ownership and copyright, with evidence that some AI programs have been trained on millions of online images without permission or payment, leading to class action lawsuits.

Tools have been developed to protect artistic works from being used by AI, such as Glaze, which uses a cloaking technique that prevents an image generator from accurately replicating an artwork's style.

AI-generated voices can be trained to sound like specific people, with enough accuracy that it fooled a voice identification system used by the Australian government, a Guardian Australia investigation revealed.

In Latin America, voice actors have reported losing work because they have been replaced by AI dubbing software. “An increasingly popular option for voice actors is to take up poorly paid recording gigs at AI voiceover companies, training the very technology that aims to supplant them,” a Rest of World report found.

AI voice cloning is getting shockingly good.

This video by ElevenLabs uses Leonardo DiCaprio’s famous climate change speech and turns it into other cloned actors’ voices.

You can even clone your own voice on their website. pic.twitter.com/L38vAvcU7Z

GPT-4, the most powerful model released by OpenAI, can write code in a wide range of programming languages and produce essays and books. Large language models have led to a boom in AI-written ebooks for sale on Amazon. Some media outlets, such as CNET, have reportedly used AI to write articles.

There are now text-to-video generators available, which, as their name suggests, can turn a text description into a moving image.

“Will Smith eating spaghetti” generated by Modelscope text2video

credit: u/chaindrop from r/StableDiffusion pic.twitter.com/ER3hZC0lJN

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

abs: https://t.co/5xCsj4PNRj
github: https://t.co/BdSzlepGQG pic.twitter.com/XY4piH6j4v

AI is also getting better at turning 2D still images into 3D visualisations.

Everyone will be able to be anyone soon.

MegaPortraits by SamsungLabs uses new neural architectures that produce high-quality avatars from medium-resolution videos and high-resolution images.

Deepfakes are getting scary good. pic.twitter.com/tCOljxt60H

3D capture is moving so fast – I scanned & animated this completely on an iPhone.

Last summer you’d need to wrangle COLMAP, Instant NGP, and FFmpeg to make NeRFs.

Now you can do it all inside Luma AI’s mobile app. Capture anything and reframe infinitely in post!

Thread 🧵 pic.twitter.com/hDngpVBas6

After weeks of research and development I finally managed to turn AI generated images into 3d scenes, refine them in real time in a non-destructive and streamlined workflow… It’s beyond camera projection since you can make entire scenes viewable in any angles.. It’s not a… pic.twitter.com/4pfAF9skPZ

AI, particularly large language models that are used for chatbots such as ChatGPT, is notorious for making factual mistakes that are easily missed because they seem reasonably convincing.

For every example of a functional use for AI chatbots, there is seemingly a counter-example of its failure.

Prof Ethan Mollick at the Wharton School of the University of Pennsylvania, for example, tested GPT-4 and found it could produce a fair peer review of a research paper in the voice of an economic sociologist.

Not sure how to feel about this as an academic: I put one of my old papers into GPT-4 (broken into into 2 parts) and asked for a harsh but fair peer review from a economic sociologist.

It created a completely reasonable peer review that hit many of the points my reviewers raised pic.twitter.com/VTVwkB8ubL

However, Robin Bauwens, an assistant professor at Tilburg University in the Netherlands, had an academic paper rejected by a reviewer who had likely used AI: the reviewer suggested he familiarise himself with academic papers that turned out to be made up.

A reviewer rejected my paper, and instead suggested me to familiarize myself with the following readings. I could not find them anywhere. After a control in GPT-2, my fears where confirmed. Those sources where 99% fake…generated by AI. https://t.co/ynx2igLObW

The question of why AI generates fake academic papers relates to how large language models work: they are probabilistic, in that they map the probability over sequences of words. As Dr David Smerdon of the University of Queensland puts it: “Given the start of a sentence, it will try to guess the most likely words to come next.”
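Smerdon's description of next-word guessing can be illustrated with a toy model (a hypothetical sketch for illustration only, far simpler than how GPT itself works): count which words follow which in a small corpus, then pick the most probable continuation.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (made up for this sketch).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word after `word`, with its probability."""
    counts = follow_counts[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = most_likely_next("the")
print(word, round(prob, 2))
```

A model like this will happily continue any prompt with whatever is statistically plausible, regardless of whether the result is true, which is why an LLM can emit a perfectly formatted but nonexistent academic citation.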

Why does chatGPT make up fake academic papers?

By now, we know that the chatbot notoriously invents fake academic references. E.g. its answer to the most cited economics paper is completely made-up (see image).

But why? And how does it make them? A THREAD (1/n) 🧵 pic.twitter.com/kyWuc915ZJ

In February, Microsoft released a pre-recorded demo of Bing's AI. As the software engineer Dmitri Brereton has pointed out, when the AI was asked to generate a five-day itinerary for Mexico City, four of its five descriptions of suggested nightlife options were inaccurate. It also badly fudged the numbers when summarising figures from a financial report, Brereton found.

ChatGPT has been used to write crochet patterns, with hilariously cursed results.

GPT-4, the latest iteration of the AI behind the chatbot, can also provide recipe suggestions based on a photograph of the contents of your fridge. I tried this with several images from the Fridge Detective subreddit, but not once did it return any recipe suggestions containing ingredients that were actually in the fridge pictures.

“Advances in AI will enable the creation of a personal agent,” Bill Gates wrote this week. “Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with.”

“This will both improve your work on the tasks you want to do and free you from the ones you don’t want to do.”

For years, Google Assistant’s AI has been able to make reservations at restaurants via phone calls.

OpenAI has now added plugin support to GPT-4, enabling it to look up data on the web and to order groceries.

2/ Collaborations with major companies

Here’s an example of meal planning for your weekend:

• Restaurant recommendation for Saturday (OpenTable)
• Recipe for Sunday (ChatGPT)
• Calculate calories (WolframAlpha)
• Order the ingredients (Instacart) pic.twitter.com/qz01ch8fh3

