Meta's artificial intelligence that transforms text into videos

Digital art and artificial intelligences that transform text into images have taken over various platforms in recent months. Applications such as DALL-E and Midjourney can create digital works starting from a very simple text prompt. This type of technology is not without problems, above all ethical ones: from algorithmic bias to the possibility that it is exploited to produce violent or discriminatory images.

Meta has introduced a new AI, called Make-A-Video, capable of transforming written instructions into high-quality video clips. Under the launch tweet, users could comment with their own prompts, and Meta responded with various demonstration examples.

The videos posted range from the funny (a giant hamburger landing in the sea off New York) to the mildly disturbing (a humanized sloth typing on a computer). In general, the results are faithful to the instructions given and rather realistic, if at times unsettling. Make-A-Video can also transform static images into clips and modify existing videos.

Meta is also aware of the slippery slope that the spread of this technology could lead to: "To reduce the risk of generating harmful content, we have examined, applied and iterated on filters," reads the presentation of the research. Meta will also apply a watermark to generated videos to prevent them from being mistaken for real footage. At the moment the technology is not available to the public, but the company plans to release it soon; a mailing list is available for updates.

This is not the first time artificial intelligence tools have been quickly exploited for less than noble purposes. Just a few months ago, Meta's BlenderBot3 chatbot began spouting anti-Semitic conspiracy theories. In 2016, Tay, a Microsoft chatbot active on Twitter, began spewing racist and misogynistic slurs. This happens both because users, through their interactions, train the artificial intelligence to behave in a certain way, and because these systems are built on vast databases of public information that include content of all kinds. For now, Make-A-Video seems to carry the same risks, and the researchers' caution in releasing the tool to the public appears justified.