Center for Ethics in the Management of Emerging Technologies


About Us

This center addresses the challenge of managing the uncharted ethical and moral dilemmas that arise directly from emerging technologies.

The Rensselaer Lally School of Management has consistently focused on the management of technology in its scholarship, educational programs, and outreach. This focus gives the School a distinctive niche and connects naturally to the technological roots of the Institute. The commercialization of new technologies is a perennial high priority for R&D-intensive companies, yet the ethical dilemmas that arise from it remain a neglected area of scholarship and education. The Lally School has therefore created an initiative for research and education in this exciting and crucially important area.

Our Activities

The Center aims to produce cutting-edge innovations in the ethics of emerging technologies and to integrate these innovations into education at Lally.

Our current activities include:

  • Highlighting ethics-based student projects and research
  • Integrating ethics-based content into Lally School courses
  • Organizing faculty research seminars to disseminate cutting-edge research in emerging technologies
  • Drawing actionable insights from novel research in the fields of ethics and emerging technologies

Building on these efforts, our exciting future plans include:

  • Publishing white papers on the ethics of emerging technologies
  • Awarding grants to help faculty develop ethics-based teaching materials and research projects
  • Hosting visiting faculty experts in ethics


Affiliated Faculty

Chari, Murali. "Strengthening ethical guardrails for emerging technology businesses." Journal of Ethics in Entrepreneurship and Technology 3.2 (2023): 127-142.

Because laws and regulations evolve only gradually, and because the newness of emerging technologies leaves boards of directors insufficiently knowledgeable, ethical guardrails for emerging technology businesses are often inadequate. The paper develops recommendations to strengthen these guardrails through regulation, boards of directors, and the executives of emerging technology businesses.

Identifying the risk culture of banks using machine learning
Abena Owusu, Aparna Gupta
International Journal of Managerial Finance, 20, 2024, pp.377-405.

Culture, ethics, and governance are crucial for risk decisions and outcomes in banking, a highly regulated industry. An unacceptable culture within banks can become a major contributor to financial crises, so regulators must play a greater role in judging how culture drives banks' behavior and their impact on society. Because assessing culture and its effect on a bank's governance is difficult, this work uses advances in natural language processing and textual analytics to assess banks and classify their risk culture profiles.
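
To make this kind of textual analytics concrete, here is a minimal Python sketch of dictionary-based scoring; the lexicon, trait categories, and sample disclosure are hypothetical illustrations, not the authors' actual method or data:

    # Toy dictionary-based scoring of risk-culture language in a bank
    # disclosure. The lexicon and trait categories are assumptions made
    # for illustration only, not the paper's method.
    import re
    from collections import Counter

    LEXICON = {
        "risk_averse": {"prudent", "conservative", "caution", "compliance"},
        "risk_seeking": {"aggressive", "growth", "expansion", "leverage"},
    }

    def risk_culture_profile(text: str) -> dict:
        """Score a document by the share of words hitting each trait list."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(words)
        total = sum(counts.values()) or 1
        return {
            trait: sum(counts[w] for w in vocab) / total
            for trait, vocab in LEXICON.items()
        }

    # Toy usage on a made-up 10-K style sentence.
    doc = "We maintain a prudent, conservative posture while pursuing growth."
    print(risk_culture_profile(doc))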

Agarwal, A., Gupta, A., Kumar, A. and Tamilselvam, S.G., 2019. Learning risk culture of banks using news analytics. European Journal of Operational Research, 277(2), pp.770-783.

Gupta, A., and Owusu, A., Regulating Risk Culture in the Insurance Industry Using Machine Learning, under review with Journal of Risk and Insurance, February 2025.


Dr. Kuruzovich’s current work in AI ethics includes co-authoring the paper “Reducing Subgroup Differences in Personnel Selection through the Application of Machine Learning” (Personnel Psychology, 2023), which received the 2025 SIOP Jeanneret Award for Excellence in the Study of Individual or Group Assessment and was among the Top 10 most-cited papers published by Personnel Psychology in 2023. He is also a co-author of “Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation” (forthcoming, Journal of Applied Psychology), a comprehensive review of algorithmic fairness strategies in personnel selection.

Zhang, N., Wang, M., Xu, H., Koenig, N., Hickman, L., Kuruzovich, J., Ng, V., Arhin, K., Wilson, D., Song, Q. C., Tang, C., Alexander, L., & Kim, Y. (2023). Reducing subgroup differences in personnel selection through the application of machine learning. Personnel Psychology, 76(4), 1125–1159. https://doi.org/10.1111/peps.12593. (2025 SIOP Jeanneret Award for Excellence in the Study of Individual or Group Assessment; Top 10 most-cited papers published by Personnel Psychology in 2023)

Hickman, L., Huynh, C., Gass, J., Booth, B., Kuruzovich, J., & Tay, L. (in press). Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0001255.

  1. Y. Wang, N. Langer, and A. Gopal. 2024. The Effect of Gender on Willingness to Bid for Competitive and Uncertain Information Technology Work. Journal of Management Information Systems, 41(4), pp. 1198-1229.
  2. N. Langer, R.D. Gopal, and R. Bapna. 2020. Onward and Upward? An Empirical Investigation of Gender and Promotions in Information Technology Services. Information Systems Research, 31(2), pp. 383–398. Awarded a Best Publication of 2020 by the Senior Scholars Consortium, Association for Information Systems, at the International Conference on Information Systems, 2021. It also appears in the Responsible Research in Business and Management (RRBM) Honor Roll (2023).
  3. W.G. Obenauer and N. Langer. 2019. Inclusion is not a Slam Dunk: A Study of Discrimination in Leadership within the Context of Athletics. The Leadership Quarterly, 30(6), 101334.
  1. Brook Zeleke, Amish Soni, and Lydia Manikonda. 2025. Human or GenAI? Characterizing the Linguistic Differences between Human-Written and LLM-Generated Text. In Companion Publication of the 17th ACM Web Science Conference 2025 (Websci Companion '25). Association for Computing Machinery, New York, NY, USA, 34–37. https://doi.org/10.1145/3720554.3735864

    This study investigates and characterizes the distinct linguistic features that differentiate human-written text from text generated by Generative AI (GenAI) systems. It addresses growing concerns about the reliability, appropriateness, and ethical implications of LLM-generated content, particularly on online discussion platforms like Reddit where original thought is encouraged. The study uses OpenAI's GPT models alongside human-written question-answer pairs. Among the identified insights: human responses are typically shorter and more informal, often use analogies, and tend to be conclusive, whereas GenAI responses are generally longer and more formal, cite existing laws or policies, and are less likely to reach definitive conclusions (a toy sketch of such surface features follows this list). Furthermore, an experiment demonstrated that human annotators can reliably differentiate between human and GenAI text, achieving high accuracy (89% to 93%) in identifying the author, consistent with the "machine heuristic" theory. These findings suggest a promising direction for developing automated approaches to detect LLM-generated text, though the study acknowledges limitations as well as ethical concerns in applications that leverage LLMs to generate text.

  2. Joseph Sebastian, Khadija Ali Vakeel, Lydia Manikonda, and T Ravichandran. 2025. Navigating Privacy and Engagement in the Digital Age. In Companion Publication of the 17th ACM Web Science Conference 2025 (Websci Companion '25). Association for Computing Machinery, New York, NY, USA, 30–33. https://doi.org/10.1145/3720554.3735863

    With the unprecedented adoption of GenAI systems in our everyday lives, many ethical issues stemming from data sharing and privacy have received little attention from the research community. This study explores how users share information on online platforms, comparing human-human settings (e.g., WhatsApp) with human-AI settings (e.g., ChatGPT). It investigates the influence of privacy concerns and cognitive absorption on information-sharing behavior. Through a quantitative analysis of survey responses, the study reveals that users tend to share more diverse information with AI agents than with humans, even when privacy concerns are present. The research also highlights differences in communication styles and topics across the two platform types, suggesting a preference for AI for certain types of information sharing.

  3. Niharika Jain, Alberto Olmo, Sailik Sengupta, Lydia Manikonda, Subbarao Kambhampati, Imperfect ImaGANation: Implications of GANs exacerbating biases on facial data augmentation and snapchat face lenses, Artificial Intelligence, Volume 304, 2022, 103652, ISSN 0004-3702, https://doi.org/10.1016/j.artint.2021.103652.

    This paper explores how Generative Adversarial Networks (GANs), a type of artificial intelligence, can exacerbate existing biases in datasets, particularly concerning gender and skin tone in facial imagery. This work demonstrates that GANs, when generating new images or transforming existing ones, tend to underrepresent minority groups and amplify the features of majority groups present in the original training data, a phenomenon known as mode collapse. Through human subject studies and analysis with commercial AI systems, the research shows that generated images of engineering professors become more masculine and lighter-skinned than the original dataset, and that popular applications like Snapchat's gender-swap lens disproportionately lighten the skin tones of people of color. The paper serves as a cautionary tale for practitioners, highlighting the ethical implications of using GANs for data augmentation and stressing the importance of recognizing and addressing these amplified biases in real-world applications.

  4. Yelena Mejova*, Lydia Manikonda*, "Comfort Foods and Community Connectedness: Investigating Diet Change during COVID-19 using YouTube Videos on Twitter", ICWSM, 2023.

    This paper explores how COVID-19 lockdowns influenced dietary habits by analyzing YouTube video content shared on Twitter. Researchers examined macronutrient changes in food videos and shifts in food-related vocabulary before and during the pandemic. The study found a decrease in energy, fat, and saturated fat in overall content, but a concerning increase in sodium for areas with higher African American populations. Furthermore, the analysis revealed a shift from influencer-focused content to practical recipes and comfort foods, suggesting that community support and coping mechanisms played a significant role in online food discussions during the pandemic. The insights also highlight ethical issues surrounding reliance on publicly available social media data for larger-scale decision making, as the set of users participating on those platforms introduces biases and limitations.
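
As a rough illustration of the surface-level cues described in the first item above (response length, formality, hedging), here is a minimal Python sketch; the marker word lists and sample sentence are hypothetical assumptions, not the paper's actual feature set:

    # Toy surface-feature extractor for human-vs-GenAI text analysis.
    # FORMAL_MARKERS and HEDGES are illustrative assumptions, not the
    # features used in the paper.
    import re

    FORMAL_MARKERS = {"furthermore", "moreover", "therefore", "pursuant"}
    HEDGES = {"may", "might", "could", "possibly", "generally"}

    def linguistic_features(text: str) -> dict:
        words = re.findall(r"[a-zA-Z']+", text.lower())
        n = len(words) or 1
        return {
            "length": len(words),  # GenAI answers tended to be longer
            "formality": sum(w in FORMAL_MARKERS for w in words) / n,
            "hedging": sum(w in HEDGES for w in words) / n,  # less conclusive style
        }

    # Toy usage on a made-up, formal-sounding sentence.
    print(linguistic_features("Furthermore, outcomes may generally vary."))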

Ethical and safety considerations in automated fake news detection
Benjamin D. Horne, Dorit Nevo, Susan L. Smith
Behaviour and Information Technology, 2023.

This paper highlights ethical issues in automated fake news detection and calls for caution when deploying tools to automatically detect mis/disinformation in real-life settings. We implement three proposed detection models from the literature that were trained on over 381,000 news articles published over six months. We test each of these models using a test dataset constructed from over 140,000 news articles published a month after each model’s training data. We use these data to explore and understand two specific problems with algorithmic fake news detection, namely bias and generalisability. Based on our analysis, we discuss the importance of understanding how ground truth is determined, how operationalisation may perpetuate bias, and how the simplification of models may impact the validity of predictions.
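
The generalisability concern can be illustrated with an out-of-time evaluation: train on articles from one window and test on articles published afterwards, as the authors do. Below is a minimal Python sketch of that evaluation design using scikit-learn; the toy articles, labels, and the TF-IDF/logistic-regression pipeline are placeholder assumptions, not the three models implemented in the paper:

    # Toy out-of-time evaluation for a news veracity classifier: the test
    # set postdates all training articles, mirroring the paper's design.
    # Data, labels, and model choice are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    train_texts = [
        "official sources confirmed the report after careful review",
        "shocking secret cure they do not want you to know about",
    ]
    train_labels = [0, 1]  # 0 = reliable, 1 = unreliable (toy ground truth)

    test_texts = ["doctors hate this shocking secret cure"]  # published later
    test_labels = [1]

    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)  # reuse the training vocabulary

    model = LogisticRegression().fit(X_train, train_labels)
    print("out-of-time accuracy:", accuracy_score(test_labels, model.predict(X_test)))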

How topic novelty impacts the effectiveness of news veracity interventions
Dorit Nevo, Benjamin D. Horne
Communications of the ACM, 65, 2022, pp.68-75.

This article provides insight into the effectiveness of fake news signals in novel news situations, focusing on AI-generated statements about the accuracy and reliability of an article. Specifically, we ask: when a novel situation arises, will AI interventions be more effective than when the same interventions are applied to news articles about more familiar situations? We find that interventions are significantly more effective in novel news situations, implying the need for faster interventions and a better understanding of the acceptance and effectiveness of interventions in different settings.

Tailoring heuristics and timing AI interventions for supporting news veracity assessments
Benjamin D. Horne, Dorit Nevo, Sibel Adali, Lydia Manikonda, Clare Arrington
Computers in Human Behavior Reports, 2, 2020.

In this work, we seek to explore two important elements of AI tools’ design: the timing of news veracity interventions and the format of the presented interventions. Specifically, in two sequential studies using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI and, further, that under this condition tailored advice is more effective than generic advice. We link our findings to prior literature on confirmation bias and provide insights for news providers and AI tool designers to help mitigate the negative consequences of misinformation.

Rating reliability and bias in news articles: Does AI assistance help everyone?
Benjamin D. Horne, Dorit Nevo, John O’Donovan, Jin Hee Cho, Sibel Adalı
Proceedings of the 13th International Conference on Web and Social Media, ICWSM 2019, 2019, pp.247-256

With the spread of false and misleading information in current news, many algorithmic tools have been introduced with the aim of assessing bias and reliability in written content. However, there has been little work exploring how effective these tools are at changing human perceptions of content. To this end, we conduct a study with 654 participants to understand whether algorithmic assistance improves the accuracy of reliability and bias perceptions, and whether the effectiveness of the AI assistance differs across types of news consumers. We find that AI assistance with feature-based explanations improves the accuracy of news perceptions. However, some consumers are helped more than others: participants who read and share news often on social media are worse at recognizing bias and reliability issues in news articles than those who do not, while frequent news readers and those familiar with politics perform much better. We discuss these differences and their implications to offer insights for future research.

Ivanov, A., Tacheva, Z., Alzaidan, A., Souyris, S., & England, A. C. (2023). Informational value of visual nudges during crises: Improving public health outcomes through social media engagement amid COVID-19. Production and Operations Management, 32, 2400–2419.

The paper explores how visual nudges on social media can ethically influence public health behavior during crises. By analyzing over 32,000 Instagram images from 117 universities, the study shows that non-coercive visual cues significantly reduced COVID-19 positivity rates, highlighting the power of nudge interventions in ethically managing public compliance.

“Popular Mobile Apps in the Pandemic Era: A Glimpse at Privacy and Competition,” Liad Wagman with Ginger Zhe Jin and Ziqiao Liu, Competition Policy International, 2023.

“Towards A Technological Overhaul of American Antitrust,” with Ginger Zhe Jin and Daniel Sokol, American Bar Association, Antitrust, 37(1), 2022.

“The Persisting Effects of the EU General Data Protection Regulation on Technology Venture Investment,” with Jian Jia and Ginger Zhe Jin, American Bar Association's Antitrust Source, 2021.

“Data Regulation and Technology Venture Investment: What Do We Learn From GDPR?,” with Jian Jia and Ginger Zhe Jin, Competition Policy International, 2021.

“Local Network Effects in the Adoption of a Digital Platform,” with Jin-Hyuk Kim, Peter Newberry, and Ran Wolff, Journal of Industrial Economics, 70(3), 493-524, 2022.

“The Short-Run Effects of GDPR on Technology Venture Investment,” with Jian Jia and Ginger Zhe Jin, Marketing Science, 40(4), 593-812, 2021.

This body of work explores the complex intersection of data privacy, digital competition, and technology-driven markets. Through rigorous empirical analysis and policy-focused insights, it examines how regulations like the European Union’s General Data Protection Regulation and shifting digital behaviors, including during the pandemic, impact venture investment, app ecosystems, and antitrust enforcement.


Our Team

Kyle Roth, Digital Media Specialist

Tanya Singh, Assistant Professor in Ethics of Emerging Technologies

Contact

Lally School of Management
Rensselaer Polytechnic Institute
110 8th Street, Pittsburgh Building, Troy, NY 12180
(518) 276-2812
