OpenAI has built a new tool that can replicate a person's voice from just 15 seconds of audio. It could be useful for reading assistance and translation, but some worry it could also be used to deceive people or take work away from professional voice actors.
1. The title is misleading and sensationalist: it implies the technology can clone anyone's voice within 15 seconds, which is not the case. The 15 seconds refers to the length of the audio sample required from the target person to create a synthetic voice that resembles them.
2. The article does not provide any evidence or examples of how the Voice Engine works, what the technical details behind it are, or how it differs from other existing voice-cloning technologies. It only mentions that the output is "emotive and realistic" without explaining what those terms mean or how they are measured.
3. The article dwells on the potential downsides of the technology, such as fraud and the impact on voice actors, without acknowledging the positive applications and benefits it could bring. It also leans on phrases like "potential misuse" and "raising concerns" without offering any concrete data or statistics to support those claims.
4. The article does not mention how OpenAI plans to address the ethical and social implications of the technology, nor does it offer any suggestions or recommendations for the responsible use and regulation of synthetic voices. It only states that OpenAI is "seeking to initiate a discussion" without specifying what kind of discussion, who would be involved, or what the expected outcomes are.
5. The article ends abruptly with an unrelated paragraph about Sora and Peking University, which appears to have been copied from another source without proper attribution or context. It never explains how those developments relate to Voice Engine or why they are relevant to the reader.