On Wednesday, Binance launched a beta version of Bicasso, a new generative art platform whose images can be minted as NFTs on Binance’s native BNB Chain.
The pilot version of the platform got its name from the combination of “Binance” and “Picasso” and has a limit of 10,000 free mints.
"Bicasso first pilot just completed with 10K NFT minted in 2.5 hours. The AI was a little stressed out, but caught its breath now," Binance CEO Changpeng Zhao, aka “CZ,” shared in a tweet the same day. While the beta version of the platform allowed each user to mint only one NFT, the crypto community seemed genuinely excited about the new generative art tool. Many NFT enthusiasts took to Twitter to share their Bicasso-generated artworks, while others expressed regret at being too late to try it.
Currently, the beta version is closed, but users can join the waitlist.
Bicasso is similar to other popular generative art platforms such as DeepAI, Midjourney, and DALL-E. Binance's art generator works with both images and text prompts: it can rework uploaded photos with a professional artistic touch or create images entirely from scratch based on textual descriptions.
"You can try to enter multiple prompts in English directly or write a description of the uploaded picture, remember words are magic," reads the official webpage.
So far, the system accepts landscapes as well as portraits of people and animals as visual input, with uploaded images capped at 50 MB.
Can artificial intelligence be sued?
While Bicasso's users await the release of the full-fledged version of the platform, which will let them create NFTs without any artistic skill, AI-based generative art tools are attracting more and more criticism.
Although the artistic community initially hoped that AI-generated images would lack emotional depth and a personal touch, it is now clear that some AI models have reached a level at which their output is indistinguishable from the work of a human artist. Yet many creators are even more concerned about AI algorithms being trained on copyright-protected data, which often includes their own artworks.
On January 13, lawyer and programmer Matthew Butterick announced a class-action lawsuit against Midjourney, Stability AI, and DeviantArt, filed together with the artists Kelly McKernan, Sarah Andersen, and Karla Ortiz. The suit targets Stable Diffusion, which the complaint describes as a "collage tool" trained on copyrighted works without the artists' consent.
"We [Butterick and Travis Manfredi, Joseph Saveri and Cadio Zirpoli, class-action litigators of the Joseph Saveri Law Firm] heard from people all over the world — especially writers, artists, programmers, and other creators — who are concerned about AI systems being trained on vast amounts of copyright work with no consent, no credit, and no compensation. Today, we’re taking another step toward making AI fair & ethical for everyone," Butterick shared in his post on Stable Diffusion litigation.
Stable Diffusion, released by Stability AI in August 2022, is based on a diffusion technique developed by Stanford University researchers in 2015. According to the complaint, during training the model copies images without the creators' permission and stores compressed versions of them, which it then recombines to generate new images.
"These resulting images may or may not outwardly resemble the training images. Nevertheless, they are derived from copies of the training images, and compete with them in the marketplace. At minimum, Stable Diffusion’s ability to flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists," warned Butterick, calling the AI model a "parasite."
Stability AI has attempted to protect artists without removing its software from the market. Working with Spawning, a project that builds tools for artists to manage how their art is used in training models, it supports the "Have I Been Trained?" app, which allows artists to opt out of having their artworks included in the Stable Diffusion 3.0 dataset.
However, many artists still deem it unfair that their works were included in the dataset without their consent in the first place. Now, they can prevent the model from using their works if they manage to opt out before March 3.
For example, artist and illustrator Katria Raden believes that AI models should be fully licensed or "trained on copyright-free material and consensual data" from the start.
Jon Oringer, founder and CEO of the popular stock image and video platform Shutterstock, raised concerns about the practical implementation of the opt-out option in a February 26 Twitter reply to the "Have I Been Trained?" developers:
"At Shutterstock, we would like to opt out all of our contributor images from SD3. The URL and API you provide only allows this to happen one image at a time. Do you expect us to create 600mm api/web calls in less than a week?"
"Opt out as a default is a lazy, sloppy, unethical policy. Opt in is what you need. Consent first, please," philosopher, scientist and Twitter influencer Grady Booch commented on the discussion.
Unfortunately, the legal status of training AI models on copyrighted data remains vague. Mathew Dryhurst, a technology researcher at Spawning, said in a Twitter post:
"Copyright, my personal position is it is unclear (for better or worse) if it will be useful or defensible. I've spoken to quite a few progressive lawyers who are also uncertain on that. That is out of our hands. We have a lot to build but we build on that assumption."