Exploring AI at STScI
About this Article
Michelle Ntampaka (mntampaka[at]stsci.edu), Neill Reid (inr[at]stsci.edu)
Published September 17, 2025
Artificial Intelligence (AI) tools and techniques are playing a growing role in many aspects of 21st-century life, including research and science support. In particular, generative AI tools, such as OpenAI’s ChatGPT, Anthropic’s Claude, and Microsoft Copilot, offer opportunities to sort and summarize text, streamline code, and sift through data faster and more efficiently than humans can. Applied effectively, generative AI tools can shoulder mundane, repetitive, and even previously intractable tasks, freeing more time for the creative efforts that only humans can lead.
However, there are also well-documented concerns centered on ethical issues, notably bias and data confidentiality. Staff at STScI are moving forward thoughtfully as we explore how to integrate these tools into our procedures and processes. Specifically, we have established an AI-use policy and implementation guidelines, aiming to enhance the quality and efficiency of our work while maintaining the trust of the community we serve.
While some tools and inputs are restricted (for example, CUI and ITAR data may not be entered into generative AI tools), STScI has embraced a policy that allows staff to engage with approved generative AI tools to support their work for STScI. All staff have access to Microsoft Copilot, and we are exploring the procurement of additional tools to support a larger range of use cases.
On September 22, we will launch the AI Community of Practice, a space for institute-wide exploration and discussion of AI tools. The Community of Practice will meet biweekly, with a format that combines structured presentations, unstructured discussions, and opportunities to test the capabilities and limitations of AI tools.
How Our Staff Members Use AI
In addition to restricting which tools and input data classifications may be used, our AI policy requires that staff ask three questions before using generative AI tools and their outputs. These questions help each staff member evaluate their use of generative AI on a case-by-case basis.
Question 1: Would I stake my professional reputation on this?
For many tasks that are core to roles at STScI, AI-generated output should be treated the same as if you had produced every keystroke yourself. You should understand and be able to defend the AI-generated output, and it should stand up to critical analysis by a peer or manager. Do not use AI to bypass steps in your professional development, nor to outsource critical elements of your role where your education and professional experience are crucial. There will be tasks, however, where the level of precision required is low compared to the cost of evaluating the AI output. In those cases, you should consider the implications of the AI making a mistake, weigh the cost of a “wrong” answer against the cost of evaluating the output, and be able to defend the choice to take a shortcut.
Question 2: Do I have the expertise to critically evaluate this output?
It can be challenging for new users of generative AI to understand the risks and failure modes of these tools. We recommend that new users consult with the Artificial Intelligence and Machine Learning Community of Practice to build the expertise needed to evaluate these outputs.
Question 3: Does this give me the appropriate level of ownership of the work?
When you use a generative AI tool to support your work, the end product should be consistent with your voice, your style, your values, and STScI’s core values of service, integrity, legacy, and excellence.
Important Caveats: Using AI to Write Proposals
While these questions establish guidelines for our work, norms and guidelines for submissions to external programs vary. For example, proposers are permitted to use generative AI as an aid in writing Hubble or Webb proposals, provided they acknowledge its use, while reviewers are specifically precluded from using AI to help write their reviews. NSF has very similar guidelines for its proposers and reviewers. STScI staff are required to identify and comply with the relevant guidelines when submitting materials to any such organizations or programs.
The Road Ahead
Feasibility studies are underway to automate repetitive and tedious tasks. Through these studies, we have developed an internal AI Playbook that lays out best practices for identifying tasks and automating them with AI. The playbook serves as a framework for evaluating whether AI is the best tool for automating a given piece of work and for building appropriate success metrics to determine whether a new tool is performing adequately.
Generative AI tools are already present in many aspects of our daily lives. At STScI, we are continuing to incorporate these tools into our work to best exploit their advantages without undermining the community’s trust in our data products, processes, and procedures. Our work is ongoing, and we will keep everyone informed of our progress.
This article is based on programs and policies developed by John Wu (Applied AI Scientist), Patrick Taylor (AI Task Force), Michelle Ntampaka (AI Task Force), Josh Peek (AI Task Force), Ray Gauss (AI Task Force), and Mercedes López-Morales (AI Task Force).