Google has paused the part of its Gemini AI that generates images of people after it produced historically inaccurate depictions, such as of America's founding figures and World War II-era soldiers. The company says it intends to fix the problem and re-enable the feature soon. Read from source...
1. The article inaccurately states that "Gemini AI was found to be generating incorrect images of historical figures". This is a misleading way to describe the issue, as it implies a deliberate intention to create false or inaccurate representations rather than acknowledging that the output reflects limitations and errors in the AI system.
2. The article repeatedly uses terms such as "inaccuracies", "mistakes", "errors", and "missing the mark" to describe Gemini's performance, which imply a negative judgment on the quality and reliability of the AI tool. These words create a sense of doubt and uncertainty about the value and potential of Gemini, without acknowledging its innovative nature or the challenges it faces as a new technology.
3. The article emphasizes the controversy surrounding Gemini's image generation feature, but does not provide enough context or background information on how the AI works, what its goals are, and what its strengths and weaknesses are. This makes it difficult for readers to understand the purpose and scope of Gemini, and why some of the generated images may be unexpected or undesirable.
4. The article focuses on the negative reactions and feedback from users who encountered problems with Gemini's image generation feature, but does not mention any positive or constructive responses from other users who enjoyed or benefited from the AI tool. This creates a one-sided and biased perspective on Gemini, which may overlook its potential applications and benefits for various domains and audiences.
5. The article compares Gemini to other AI tools such as OpenAI's ChatGPT and Microsoft’s Copilot, but does not provide any clear or objective criteria for evaluating their performance or quality. This makes it unclear why Gemini is inferior or superior to these alternatives, and how they differ in terms of functionality, usability, and ethical implications.
6. The article ends with a statement that Google is "working to address recent issues" with Gemini, which implies that the AI tool is faulty and problematic and needs to be fixed or improved. Framing the story this way casts doubt on Gemini's readiness, without acknowledging that iterative fixes are a normal part of developing a new technology.
Negative
Analysis: The article discusses a controversy involving Google's Gemini AI, which has led to the suspension of its feature for generating images of people. The sentiment is negative because the coverage highlights problems and limitations with the technology, as well as backlash from users and critics.