Unless you live under a rock or abstain from social media and internet pop culture, you must have heard of the trend, if not seen the thousands of images flooding popular social platforms. In recent weeks, millions of people have used OpenAI's Artificial Intelligence (AI) chatbot to convert their images into Studio Ghibli-style art. Millions have tried their hand at transforming personal photographs, memes, and even historical moments into the whimsical, hand-drawn aesthetic of Hayao Miyazaki's films, such as Spirited Away and My Neighbor Totoro.
The trend has driven a huge surge in popularity for OpenAI's AI chatbot. However, while people happily feed the chatbot images of their family and friends, experts have raised privacy and data security concerns over the viral trend. And these are not trivial concerns. As experts have highlighted, by uploading their images, users are potentially allowing the company to train its AI models on them.
Additionally, a more distant but nefarious problem is that their facial data could become part of the internet forever, causing a permanent loss of privacy. In the hands of bad actors, this data can also enable cybercrimes such as identity theft. So, now that the dust has settled, let us break down the deeper implications of OpenAI's viral trend, which has seen global participation.
Origin and rise
OpenAI introduced the native image generation feature in ChatGPT in the last week of March. Powered by the new capabilities added to the GPT-4o Artificial Intelligence (AI) model, the feature was first released to paid users of the platform, and a week later it was expanded to those on the free tier. While ChatGPT could already generate images through the DALL-E model, the GPT-4o model brought better abilities, such as accepting an image as an input, improved text rendering, and more accurate prompt adherence with support for inline editing.
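For readers curious about what this image-as-input capability looks like programmatically, here is a minimal sketch using OpenAI's Python SDK. The model name gpt-image-1 and the images.edit endpoint reflect OpenAI's public API; note that the viral trend itself played out inside the ChatGPT app rather than through code, and the file names below are hypothetical.

```python
# Minimal sketch: restyling a photo via the OpenAI Python SDK.
# Assumes the `openai` package and the gpt-image-1 image model;
# the viral trend itself used the ChatGPT app, not this API.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("family_photo.png", "rb") as photo:  # hypothetical input file
    result = client.images.edit(
        model="gpt-image-1",  # API name of OpenAI's newer image model
        image=photo,          # the user's own photo as the input
        prompt="Redraw this photo as hand-drawn, Studio Ghibli-style animation art",
    )

# The API returns the edited image as base64-encoded bytes.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("ghibli_style.png", "wb") as out:
    out.write(image_bytes)
```

The key point for this story is the `image=photo` line: unlike plain text-to-image generation, the user's actual photograph is uploaded to the company's servers as part of the request.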
Early adopters took to the feature quickly, and the ability to add images as inputs became especially popular, since it is much more fun to see your own photos converted into artwork than to create generic images from text prompts. Although it is incredibly difficult to pin down the true originator of the trend, software engineer and AI enthusiast Grant Slatton is credited with popularising it.
His post, in which he converted a photo of himself, his wife, and their family dog into Ghibli-style art, had been viewed more than 52 million times at the time of writing, with 16,000 bookmarks and 5,900 reposts.
Although official numbers on how many users created Ghibli-style images are not available, the metrics above, along with the wide sharing of these images on social media platforms such as X (formerly known as Twitter), Facebook, Instagram, and Reddit, suggest that participation may run into the millions.
The trend extended beyond individual users, with brands and even government institutions, such as the Government of India's MyGovIndia X account, posting Ghibli-inspired visuals. Celebrities such as Sachin Tendulkar and Amitabh Bachchan were also seen sharing these images on social media.
The privacy and data security concerns behind the trend
According to its support pages, OpenAI collects user content, including text, images, and file uploads, to train its AI models. An opt-out mechanism is available on the platform; when enabled, it stops the company from collecting the user's data. However, the company does not explicitly tell users about this option, or that it collects data for AI training, when they first register and access the platform (this is part of ChatGPT's terms of use, but most users do not read them). Explicitly, here, would mean something such as a pop-up page that explains the data collection and the opt-out mechanism.
This means that most general users who are sharing their images to generate Ghibli-style art have no knowledge of the privacy controls, and are sharing their data with the AI firm by default. So, what actually happens to this data?
According to OpenAI's support page, unless a user manually deletes a chat, the data is stored on its servers indefinitely. Even after the user deletes the data, permanent removal from the servers can take up to 30 days. And for as long as the user data sits with OpenAI, the company can use it to train its AI models (this does not apply to the Teams, Enterprise, or Education plans).
“When an AI model is pre-trained on any information, it becomes part of the model’s parameters. Even if a company deletes user data from its storage systems, it is extremely difficult to reverse the training process. While it is unlikely that the input data can be recovered from the model, as companies have added safeguards, the AI model definitely benefits from the data.”
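To make that concrete, the toy sketch below illustrates gradient-descent training in general (not OpenAI's actual pipeline): a parameter update is computed from the training example, so deleting the example from storage afterwards does not undo the arithmetic already applied to the model's weights.

```python
# Toy illustration: why deleting training data does not "untrain" a model.
# A single gradient-descent step on a one-parameter linear model (w * x ~ y).
# This mirrors gradient training in general, not OpenAI's actual pipeline.

w = 0.5           # model parameter before training
x, y = 3.0, 6.0   # one "user-contributed" training example
lr = 0.01         # learning rate

# Gradient of the squared-error loss (w*x - y)^2 with respect to w:
grad = 2 * (w * x - y) * x
w = w - lr * grad  # the example is now baked into w

print(w)  # 0.77 -- changed as a function of (x, y)

# Deleting (x, y) from storage does not restore w to 0.5: the update has
# already been applied, and after millions of interleaved updates there is
# no practical way to subtract one example's influence from the weights.
del x, y  # the raw data is gone; the learned weight remains
print(w)  # still 0.77
```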
But what is the harm, one might ask. The harm is that when OpenAI, or any other AI platform, collects user data without explicit consent, users do not know how it is used and have no control over it.
“Once a photo is uploaded, it is not always clear what the platform does with it,” Mukherjee said.
Mukherjee also stated that in the rare event of a data breach, where user data is stolen by bad actors, the consequences can be serious. With the rise of deepfakes, bad actors can misuse the data to create fake content that damages individuals’ reputations, or even to commit identity-based crimes such as fraud.
The consequences can be long-lasting
Optimistic readers could argue that a data breach is a rare possibility. However, such individuals are not considering the problem of permanence that comes with facial features.
CloudSEK researcher Gagan Aggarwal said, “Unlike personal identifiable information (PII) or card details, all of which can be replaced or changed, facial features stay behind permanently as a digital footprint, causing a lasting loss of privacy.”
This means that even if a data breach occurs 20 years from now, the people whose images are leaked will still face security risks. Aggarwal said that open-source intelligence (OSINT) tools already exist that can run internet-wide searches on a face. If such a dataset falls into the wrong hands, it can pose a big risk to the millions of people who participated in the Ghibli trend.
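The reason a face works as a permanent identifier is that it can be reduced to a numerical embedding and matched against any future photo of the same person. The sketch below illustrates the idea with the open-source face_recognition library; the file names are hypothetical placeholders, and real OSINT tools operate at internet scale rather than on two local files.

```python
# Sketch: why a leaked face remains a usable identifier decades later.
# Uses the open-source `face_recognition` library (pip install face_recognition).
# File names are hypothetical placeholders.
import face_recognition

# Each face is reduced to a 128-dimensional embedding vector.
leaked = face_recognition.load_image_file("leaked_upload_2025.jpg")
recent = face_recognition.load_image_file("public_photo_2045.jpg")

leaked_encoding = face_recognition.face_encodings(leaked)[0]
recent_encoding = face_recognition.face_encodings(recent)[0]

# Unlike a password or card number, this vector cannot be "reset":
# the same person produces a near-identical embedding years later.
match = face_recognition.compare_faces([leaked_encoding], recent_encoding)
distance = face_recognition.face_distance([leaked_encoding], recent_encoding)

print(f"Same person: {match[0]}, embedding distance: {distance[0]:.3f}")
```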
And the problem is only going to grow as more people share their data with cloud-based models and technologies. Recently, Google introduced its Veo 3 video generation model, which can not only create hyperrealistic videos of people but also include dialogue and background sounds. The model supports image-based video generation, which may soon spark another similar trend.
The idea here is not to create fear or panic, but to generate awareness of the risks users take when they participate in seemingly innocent internet trends or share data with a cloud-based AI model. That knowledge will enable people to make well-informed choices in the future.
As Mukherjee explains, “Users should not trade their privacy for digital fun. Transparency, control and safety need to be part of the experience from the beginning.”
This technology is still in its nascent stage, and as new capabilities emerge, more such trends are sure to appear. The need of the hour is for users to stay mindful as they interact with such tools. The old saying about fire also applies to AI: it is a good servant but a bad master.