A company called Midjourney built a computer program that can create pictures of almost anything. But it trained the program on many famous artists' paintings without asking permission. This upset the artists, and some of them fought back by "poisoning" the training data with misleading images. Researchers also discovered that the data the program learned from included explicit images of children. Now people are debating how AI programs should be more careful about the data they learn from.
- The article opens with a sensationalized headline that implies a scandal, yet provides no evidence of actual harm or wrongdoing by Midjourney. This is a classic clickbait technique for attracting attention.
- The article uses vague terms like "sparking outcry" and "widespread backlash" without specifying who is reacting or how. It also offers no sources or quotes from the artists or other stakeholders involved, making it read like an opinion piece rather than a factual report.
- The article jumps from one topic to another: it first introduces Midjourney and its database leak, then mentions data poisoning as a retaliation method, then shifts to a Stanford Internet Observatory study on explicit images of children found in AI training data. This creates confusion and dilutes the main focus of the story, which should be Midjourney's case.
- The article ends with the disclaimer "This image was generated using artificial intelligence," which is confusing in this context. It suggests that the images accompanying the article are themselves computer-generated rather than authentic, undermining the credibility of the reporting and of the issue at hand.