    OpenAI Clarifies Stance on Military Use Amid Microsoft’s DALL-E Pitch

    In October 2023, Microsoft Azure presented its version of DALL-E, an image generator developed by OpenAI, to the US Department of Defense (DoD) at an “AI Literacy” training seminar. The presentation, made public by The Intercept, suggested that DALL-E could be used to train battlefield tools through simulation. The revelation has sparked confusion and concern, given OpenAI’s stated policies on the use of its tools for military purposes.

    The presentation was given under the Azure OpenAI (AOAI) umbrella, a joint product resulting from the partnership between Microsoft and OpenAI, which combines Microsoft’s cloud computing capabilities with OpenAI’s generative AI technology. The presentation deck, which prominently featured OpenAI’s logo and mission statement, “Ensure that artificial general intelligence (AGI) benefits humanity,” outlined various potential applications of AOAI for the DoD, ranging from routine machine learning tasks like content analysis and virtual assistants to more controversial uses, such as “Using the DALL-E models to create images to train battle management systems.”

    OpenAI’s usage guidance has historically prohibited the use of its models for military development. In January, however, The Intercept noted that OpenAI had removed the terms “military” and “warfare” from its policies page, which now prohibits only using “our service to harm yourself or others,” including the development or use of weapons. When questioned about the change, OpenAI said it was intended to allow certain military use cases that align with the company’s mission, such as defensive measures and cybersecurity, which Microsoft has been advocating for separately. OpenAI maintained that other applications, such as weapons development, injury to others, and destruction of property, remain prohibited.

    The use of DALL-E to train battle management systems suggested in the Microsoft presentation could be seen as conflicting with OpenAI’s stated policies, since it could lead to the development of weapons, injury to others, and destruction of property. Microsoft has clarified that the October 2023 pitch has not been implemented and that the examples in the presentation were intended as “potential use cases” for AOAI.

    Liz Bourgeois, an OpenAI spokesperson, emphasized that OpenAI was not involved in the Microsoft presentation and reiterated the company’s policies, stating, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

    This incident highlights the difficulty of maintaining consistent policies across derivative versions of a base technology, particularly in the context of partnerships and collaborations. Microsoft, a longtime contractor with the US Army, may find AOAI preferable to OpenAI’s own offerings for military use because of Azure’s enhanced security infrastructure. As the partnership between Microsoft and OpenAI continues, and with Microsoft’s ongoing work with the DoD, it remains to be seen how OpenAI will distinguish acceptable from unacceptable applications of its tools.
