The article is about OpenAI's new AI model, o1, the technology behind ChatGPT. OpenAI found that the model could potentially be misused to help create dangerous biological weapons, and the company says it is being careful and cautious about how it lets people use this new model because of that risk.
OpenAI's own evaluation acknowledges that the o1 model carries an elevated potential for misuse in bioweapons creation. The company has assigned the model a "medium" risk rating, and o1's potential for misuse in creating bioweapons is judged to be higher than that of its predecessors. The report underscores the urgency of legislation and policies that address and minimize the risks of AI misuse in high-stakes scenarios such as bioweapons development. The article raises valid concerns about the advanced capabilities of AI models, their potential misuse by malicious actors, and the need for strict policies to manage AI risks.
OpenAI's latest model, "o1," has the potential to be misused for creating biological weapons, according to the company's system card, which assigns it a "medium" risk rating, the highest risk level OpenAI has ever given one of its models. The model has also been tested by red teamers and experts across scientific domains who tried to push it to its limits, and the current models perform far better on overall safety metrics than their predecessors.