How AI Can Bolster Sustainable Investing

Jul 31, 2023

Sustainability investors are turning to AI solutions to help them pursue both their ESG objectives and financial performance goals, while weighing the potential risks.

Key Takeaways

  • AI applications for sustainable investing include using machine learning to improve the accuracy of ESG metrics and AI-powered satellite imaging to detect negative environmental patterns.
  • The risks of using AI tools for ESG investing include data privacy, reliability and model bias.
  • When assessing AI’s utility for sustainable investing, investors should consider why a problem necessitates AI as the best solution, and the data and governance required.

Recent advancements in artificial intelligence—including machine learning and natural language processing—can be powerful tools for sustainability-focused investors, as an array of AI-powered solutions promises to help investors navigate companies’ financial performance prospects and environmental, social and governance (ESG) factors.


“The integration of AI into sustainable investing could mark a profound turning point in investors’ ability to navigate the complex web of ESG factors,” says Matthew Slovik, Head of Global Sustainable Finance at Morgan Stanley. “By harnessing AI’s analytical capabilities, investors can identify companies with strong ESG performance, mitigate risks and shape portfolios that better align with sustainability objectives.”


Investors interested in using AI in support of their sustainable investing and ESG objectives should consider the types of applications currently in market, as well as the potential risks and key questions to ask when assessing these new tools.


AI Applications in Sustainable Investing


Some investors are already adopting AI technologies for various purposes. Three of the most prominent examples include:


Predictive models to fill ESG disclosure gaps: Machine learning, a branch of AI in which models learn patterns from data rather than following explicitly programmed rules, offers the potential to improve the accuracy of ESG metrics. These metrics are in high demand: While 88% of asset owners in a Morgan Stanley Institute for Sustainable Investing survey deemed ESG reporting important in selecting an asset manager, only 39% of asset managers offered this reporting and disclosure.1 Meanwhile, among corporates, only 35% of listed companies globally disclose at least some of their greenhouse gas emissions—an increasingly critical risk metric for investors.2

Predictive modeling, which relies on machine learning techniques and uses publicly available data, is helping investors fill these gaps in sustainability disclosures. Currently, when estimating greenhouse gas emissions, investors often calculate an industry average for companies that do not disclose emissions, or use simple linear extrapolation to model emission values based on parameters disclosed by companies. By contrast, machine-learning models can identify additional parallels in the data, based not only on industry but factors such as location, revenue breakdown and types of products and services. Identifying these data relationships may result in more accurate estimates.
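The contrast between an industry-average estimate and a model that also uses company-level features can be sketched as follows. This is a minimal illustration on synthetic data, not an actual emissions-estimation methodology; the features, numbers and company profiles are invented for the example.

```python
# Hypothetical sketch: estimating undisclosed greenhouse-gas emissions.
# Baseline: assign each non-disclosing company its industry average.
# ML approach: a regression model that also uses region and revenue,
# capturing relationships an industry mean ignores.
# All data below is synthetic and purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic disclosing companies: [industry code, region code, revenue $M]
n = 200
X = np.column_stack([
    rng.integers(0, 5, n),       # industry sector code
    rng.integers(0, 3, n),       # region code
    rng.uniform(10, 1000, n),    # annual revenue, $M
])
# Assume emissions depend on sector AND scale with revenue (tonnes CO2e)
y = 50 * (X[:, 0] + 1) + 0.8 * X[:, 2] + rng.normal(0, 10, n)

# Baseline: per-industry average, ignoring revenue and region
industry_avg = {i: y[X[:, 0] == i].mean() for i in range(5)}

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# A small non-disclosing company in sector 2: the industry mean
# overstates its emissions, while the model adjusts for its low revenue.
small_co = np.array([[2, 1, 20.0]])
print("industry-average estimate:", round(industry_avg[2], 1))
print("model estimate:           ", round(model.predict(small_co)[0], 1))
```

In this toy setup, the model's estimate for the small company falls well below the sector average because it accounts for revenue, mirroring how additional parallels in the data can sharpen estimates.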


Natural-language processing to gauge sentiment and risk: Natural-language processing (NLP) gives investors the ability to analyze thousands of media and other sources of information daily, mitigating the shortcomings of manual data collection and risk assessment, including subjectivity and limited capacity. This approach can be used to identify companies with controversial ESG practices—such as allegations of human rights abuses or corruption and bribery—that the companies may not report themselves but could be material information for investors. NLP tools can detect and collect online criticisms and allegations in real time, providing investors with key information on public perception and its potential impact on company stock prices.
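The flag-and-aggregate workflow behind such screening can be sketched in a few lines. Production NLP tools rely on trained language models; this keyword-based version is only a stand-in to show the shape of the pipeline, and every headline, company name and term list below is invented.

```python
# Minimal sketch of ESG controversy screening over news headlines.
# Real systems use trained NLP models for entity and sentiment detection;
# this keyword lookup merely illustrates the flag-and-aggregate idea.
# All companies and headlines are fictional.

from collections import Counter

# Hypothetical mapping of controversy terms to ESG categories
CONTROVERSY_TERMS = {
    "bribery": "governance",
    "corruption": "governance",
    "forced labor": "social",
    "spill": "environmental",
    "pollution": "environmental",
}

def flag_controversies(headlines):
    """Return per-company counts of flagged ESG controversy mentions."""
    counts = {}
    for company, text in headlines:
        lowered = text.lower()
        for term, category in CONTROVERSY_TERMS.items():
            if term in lowered:
                counts.setdefault(company, Counter())[category] += 1
    return counts

news = [
    ("AcmeCo", "AcmeCo executives charged in bribery probe"),
    ("AcmeCo", "Chemical spill at AcmeCo plant under investigation"),
    ("GreenCo", "GreenCo reports record solar installations"),
]
print(flag_controversies(news))
# AcmeCo is flagged in the governance and environmental categories;
# GreenCo generates no flags.
```

Aggregated flags like these can then feed dashboards or alerts, so that unreported controversies surface as they appear in the news cycle rather than at the next disclosure date.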


Satellite technologies to assess environmental risks: AI-powered satellite sensors can determine companies’ exposure to physical risks or negative environmental impacts, helping investors make better decisions. These solutions can quickly and accurately analyze a large number of inputs, such as infrared images, to detect patterns.

Currently, ESG ratings agencies use imagery of deforestation and reforestation to assess the quality of voluntary carbon offsets. Another powerful application, the Methane Alert and Response System (MARS), monitors leaks of methane, a greenhouse gas more than 25 times as potent as carbon dioxide in trapping heat in the atmosphere. Launched at the UN Climate Change Conference in 2022, MARS will analyze information gathered by worldwide mapping satellites to detect concentrated areas of methane emissions and attribute them to a specific source. This data will be made public, giving investors the opportunity to directly engage with a company on its strategy for reducing methane emissions.3

“Using satellite imagery for sustainability-related geospatial analysis is increasingly an area of focus for Morgan Stanley’s Sustainable Insights Lab, our team of data scientists that uses advanced analytics to help develop evidence-based sustainable investment strategies across asset classes,” says Slovik. “The real-time ability to better understand, for example, how rising sea levels may increase flood risk for low-lying coastal real estate or how a company’s physical assets may be exposed to wildfire damage helps the firm and our clients make better-informed decisions.”


Understanding the Risks

When exploring AI tools, ESG investors should consider potential risks, including data privacy and security. Since AI models require a wide range of data that could include personally identifiable and sensitive information, there are concerns that AI can be used to track private behavior, which could even become publicly accessible through reverse engineering.


There are also risks related to the potential lack of reliability and accountability from AI-generated information. Despite the flurry of publicity surrounding generative AI, information generated from large language models may not be accurate, or transparent in its sourcing. Without safeguards ensuring transparency and accountability, AI tools could be used to spread discriminatory language or even promote misinformation that harms the integrity of the global financial system. Necessary safeguard mechanisms include outlining clear responsibility for the outcomes of models and data, as well as the ability and willingness to share models’ logic and outputs. Bias can also creep into the system. If training data is not representative of a population and is tainted by algorithmic or human biases, the output data may be biased too.


Regulation will be important in helping reduce these risks. “Governments are starting to craft legislation on the use of the technology, albeit at a much slower pace than the innovation and development of AI models and use cases,” says Jamie Martin, Head of Morgan Stanley’s EMEA Sustainability Office. “Leading the charge is the European Union, which is close to passing one of the world's first laws4 governing artificial intelligence. Proponents of the EU legislation expect it to lead the way for other global governing bodies to follow.”


Questions for Investors

Investors should consider the following questions when assessing AI tools for sustainable investing:


1) What problem is AI trying to solve? “Being clear on the problem statement and the market size of the opportunity that is being addressed is paramount,” says Sanghamitra Karra, Head of EMEA Inclusive Ventures Group at Morgan Stanley. “What is interesting and unique about the design and execution of the AI solution to solve this problem?”


2) What data sets are being used? An AI algorithm is only as good as the data used. Establish what data is available, what the unique advantages of the specific data set might be, and why the AI solution or company has the right to that data. Additionally, understand whether the developers have already done the work to identify any biases in the data and address them when designing the tool.


3) What data and model governance mechanisms are in place? Confirm that strong controls ensure safety and ethics. Reported and tracked instances of AI misuse have increased by 26 times since 2012.5 Extra safeguards are needed for solutions in areas of special risk to society, such as healthcare, education, critical infrastructure, administration of justice and the use of sensitive personal data.


4) Why is this company the right owner of this solution? “Investors should be wary of a company that describes itself as an AI firm for sustainability or sustainable investing but has an insufficient AI background or technical ability and experience to build solutions,” Karra says. AI-powered solutions for sustainable investing should demonstrate the ability to scale under the right management team.
