Thesis and internship positions

User-centered prototyping and development of Ethical AI applications

Today there is a gap in tools for applied AI that are understandable, explainable, and accountable to various stakeholder groups. We seek to address this gap by developing user-centered tools that communicate explanations and fairness of AI decisions in a manner that is appropriately understandable and has been validated by end users themselves.

Example project investigations

  • Prototyping interfaces for communicating explainable AI results to end users. Which methods work best for which contexts and end users? How can AI and design parameters be adjusted to enhance usability?
  • Development of explainable or fair AI applications following a rigorous approach, such as design science, and involving end users (user-centered design).

Desired qualifications

  • Background in AI concepts and methods, or strong willingness to learn
  • Documented experience with the design and/or development of computer applications
  • Preferably, applicants possess experience in software development, human-computer interaction, and/or interaction design
    • For human-computer interaction this will include documented experience with prototyping interfaces and relevant evaluation methods and metrics
    • For software development this will include documented experience developing software (preferably, web-based applications)

Machine Learning for the topics of explainable AI and fair AI

The AI Sustainability Center aims not only to apply existing AI methods, but also to advance state-of-the-art techniques and rigorously investigate existing approaches through an ethical lens. We seek candidates who want to bridge the gap between the technical demands of AI innovation and the needs of societal stakeholders. To accomplish this, we will analyze and extend the existing landscape of fair and explainable AI algorithms according to the needs dictated by society.

Example project investigations

  • Empirical comparison of fairness algorithms across ethically sensitive domains: Which approaches enhance fairness most effectively, and why? What are the performance vs. fairness trade-offs across various domains and contexts?
  • Developing and applying generalized state-of-the-art Fair and/or Explainable AI algorithms
  • Developing and applying state-of-the-art Fair or Explainable AI algorithms tailored for particular contexts, sectors, or industries (e.g. health care)
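One way to make the performance vs. fairness trade-off concrete is to score a model on both a standard performance metric (accuracy) and a simple group-fairness metric such as demographic parity. A minimal sketch in Python (the toy data and function names are illustrative assumptions, not part of any Center codebase):

```python
# Illustrative sketch: scoring predictions on accuracy and on
# demographic parity difference (gap in positive-prediction rates
# between two protected groups). All data here is made up.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Toy labels, model predictions, and a binary protected attribute.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                       # 0.75
print(demographic_parity_difference(y_pred, groups))  # 0.0
```

A fairness intervention that equalizes the two group rates may lower accuracy; plotting both metrics across algorithms and domains is one way to study the trade-off empirically.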

Desired qualifications

  • Background in AI concepts and methods, or strong willingness to learn
  • Documented experience with machine learning modeling and evaluation, demonstrated through course work and/or practical work using programming languages (e.g. Python, R, Java, C++)

Societal investigations for the topics of explainable AI and fair AI

The AI Sustainability Center collaborates with experts, practitioners, and researchers across all disciplines where AI impacts society. We seek to better understand public and other stakeholder perspectives on requirements for AI, such as explainability and fairness. We offer specialized projects to motivated researchers in social, legal, behavioral, economic, and various technical studies in order to capture insights from diverse viewpoints.

Example project investigations

  • Investigating perceptions of AI fairness across various stakeholder and demographic groups through the use of survey methods and/or data crawling (e.g. Twitter data)
  • Establishing and validating protocols for AI fairness metrics that are applicable to a wide variety of contexts (e.g. high-risk and low-risk)

Desired qualifications

  • Background with relevant AI concepts and methods, or a strong willingness to learn
  • Familiarity with relevant qualitative and quantitative research methods

How to apply

Send your CV and a cover letter, indicating which area and potential project you are interested in, why you want to join the AI Sustainability Center, and your relevant skills, to