OpenAI’s new language generation model, GPT-4, is now trained on both images and text, according to the company’s president, Greg Brockman. However, Brockman declined to reveal any details about the source of the images used to train the model or other specifics about its training data.
GPT-4 is a highly advanced artificial intelligence language model that generates human-like text in response to prompts. With the addition of image training, it could move a step closer to becoming a truly multimodal AI that understands and interacts with the world in a more human-like way.
Despite this exciting development, OpenAI’s decision to keep the GPT-4 data source information under wraps raises concerns about transparency and accountability, especially as large-scale AI models continue to play an increasingly important role in society. Experts have called for greater transparency in AI development to ensure that these systems are not biased or harmful.
OpenAI co-founder Ilya Sutskever said that open-sourcing the details of GPT-4, including its training data, would be a bad idea. He argues that the technology underlying artificial general intelligence (AGI), which is speculated to be extremely powerful, is still in its early stages. Sutskever believes that releasing such details in the past was a mistake, and that competition is not the only reason for withholding them.