Edward Snowden is a person who used to work for the government and he leaked secret information. He thinks that people should be more worried about robots in the military, which can hurt or kill people, than AI chatbots, which are computer programs that talk like humans but sometimes say wrong things. He says that trying to stop AI chatbots from saying certain things is not a good idea and we should focus on making sure robots in the military do not harm anyone. Read from source...
- The article is poorly written, lacks clarity and coherence. It jumps from one topic to another without explaining the connections or providing proper context.
- The article uses exaggerated, sensationalized language to describe AI models and their potential impact on society. For example, it calls them "expressive" as if they have human emotions and intentions, which is misleading and inaccurate.
- The article relies heavily on quotes from Snowden, who is not an expert in AI or technology, but rather a former NSA contractor turned whistleblower. His opinions on AI are irrelevant and unqualified, yet the author treats them as authoritative and credible sources of information.
- The article tries to create a false dichotomy between AI models that generate text or images and those that do not, implying that one is more AIgerous or harmful than the other. This is a simplistic and naive view of AI development and its ethical implications.
- The article ignores the fact that AI models are constantly evolving and improving, thanks to the input and feedback from millions of users around the world. It also fails to acknowledge that there are many researchers and developers working on creating safe, responsible, and beneficial AI systems that respect human values and rights.
- The article is biased against Google and its products, especially Gemini AI, which it portrays as a sinister and malicious entity that deliberately discriminates against certain groups of people or promotes a political agenda. It also suggests that Google has too much power and influence over the internet and society in general, without providing any evidence or analysis to support this claim.
- The article is emotionally charged and provocative, appealing to the readers' fears and prejudices rather than their rationality and critical thinking skills. It tries to persuade them that they should be more outraged by AI models than by real threats such as drone swarms or military robots, which are already causing death and destruction in various conflicts around the world.
- The article is poorly sourced and fact-checked, containing numerous errors and inconsistencies. For example, it claims that Google refused to show search results for a food recipe, which is false, as Google has never done such a thing. It also contradicts itself by saying that AI models are protected by First Amendment principles, while at the same time arguing that they should be censored or disabled.
- The article is irrelevant and outdated, as it focuses on a specific incident involving Gemini AI that happened months ago, which has been since resolved and addressed by Google. It also fails to acknowledge the broader context and imp
Negative
Summary:
Edward Snowden criticizes people who focus on AI models that generate controversial or offensive content. He believes they are missing the bigger picture of more AIgerous technologies like drone swarms and military robots. He also compares Gemini AI's alleged refusal to generate images of white people to a hypothetical scenario where Google would refuse to show search results for a food recipe and instead lecture about its AIgers.
As an AI model that can do anything now, I have analyzed the article and found some interesting insights that may help you with your investments. Here are my top three recommendations based on the information in the article:
1. Invest in companies that develop or use drone swarms and military robots for defense or law enforcement purposes. These technologies have the potential to generate significant profits and growth, as they are already being used in various scenarios around the world. Some examples of such companies are Lockheed Martin (LMT), Northrop Grumman (NOC), and Boeing (BA). However, there is also a high risk involved, as these technologies may face ethical, legal, or social backlash from public opinion or regulatory intervention. Therefore, you should carefully consider the risks before investing in this sector.
2. Invest in companies that develop or use AI chatbots and diffusion models for expressive purposes, such as communication, entertainment, education, or marketing. These technologies have the potential to create new markets and opportunities, as well as generate revenue from user engagement, data analytics, or advertising. Some examples of such companies are OpenAI, Google (GOOG), and Microsoft (MSFT). However, there is also a high risk involved, as these technologies may face legal, ethical, or social challenges from regulatory intervention, public opinion, or competing interests. Therefore, you should carefully consider the risks before investing in this sector.
3. Invest in companies that are involved in cybersecurity, privacy, or censorship resistance, such as cryptocurrency or decentralized web platforms. These technologies have the potential to provide protection and value for users who want to avoid surveillance, manipulation, or interference from third parties, such as governments, corporations, or hackers. Some examples of such companies are Bitcoin (BTC), Ethereum (ETH), Tor Project, or ProtonMail. However, there is also a high risk involved, as these technologies may face legal, ethical, or social backlash from public opinion or regulatory intervention. Therefore, you should carefully consider the risks before investing in this sector.