Center for Ethics in the Management of Emerging Technologies

About Us

This center addresses the challenges of managing the uncharted ethical and moral dilemmas that spring directly from emerging technologies.

The Rensselaer Lally School of Management has consistently focused on the management of technology in its scholarship, educational programs, and outreach. This focus gives the School a distinctive niche and connects naturally to the technological roots of the Institute. Commercializing new technologies is a perennial high priority for R&D-intensive companies, yet the ethical dilemmas that arise in the process remain a neglected area of scholarship and education. The Lally School has therefore created an initiative for research and education in this exciting and crucially important area.

Our Activities

The Center aims to create cutting-edge innovations in the ethics of emerging technologies and to integrate those innovations into education at Lally.

Our current activities include:

  • Highlighting ethics-based student projects and research
  • Integrating ethics-based content into Lally School courses
  • Organizing faculty research seminars to disseminate cutting-edge research on emerging technologies
  • Drawing actionable insights from novel research in the fields of ethics and emerging technologies

Building on these efforts, our exciting future plans include:

  • White papers on the ethics of emerging technologies
  • Grants to aid faculty in developing ethics-based teaching materials and research projects
  • Hosting visiting faculty experts in ethics


Affiliated Faculty

Chari, Murali. "Strengthening ethical guardrails for emerging technology businesses." Journal of Ethics in Entrepreneurship and Technology 3.2 (2023): 127-142.

Because laws and regulations evolve only gradually, and because the newness of emerging technologies leaves boards of directors insufficiently knowledgeable, ethical guardrails for emerging technology businesses are often inadequate. The paper develops recommendations for strengthening these guardrails with respect to the regulations, boards of directors, and executives of emerging technology businesses.

Identifying the risk culture of banks using machine learning
Abena Owusu, Aparna Gupta
International Journal of Managerial Finance, 20, 2024, pp.377-405.

Culture, ethics, and governance are crucial for risk decisions and outcomes in banking, a highly regulated industry. An unacceptable culture within banks can become a major contributor to financial crises, so regulators must play a greater role in judging how culture drives banks’ behavior and its impact on society. Because assessing culture and its effect on a bank’s governance is difficult, this work uses advances in natural language processing and textual analytics to assess banks and classify their risk culture profiles.

Agarwal, A., Gupta, A., Kumar, A. and Tamilselvam, S.G., 2019. Learning risk culture of banks using news analytics. European Journal of Operational Research, 277(2), pp.770-783.

Gupta, A., and Owusu, A., "Regulating Risk Culture in the Insurance Industry Using Machine Learning," under review at the Journal of Risk and Insurance, February 2025.


Ethical and safety considerations in automated fake news detection
Benjamin D. Horne, Dorit Nevo, Susan L. Smith
Behaviour and Information Technology, 2023.

This paper highlights ethical issues in automated fake news detection and calls for caution when deploying tools to automatically detect mis/disinformation in real-life settings. We implement three detection models proposed in the literature, trained on over 381,000 news articles published over six months. We test each of these models on a dataset of over 140,000 news articles published in the month after each model’s training data. We use these data to explore and understand two specific problems with algorithmic fake news detection, namely bias and generalisability. Based on our analysis, we discuss the importance of understanding how ground truth is determined, how operationalisation may perpetuate bias, and how the simplification of models may impact the validity of predictions.

How topic novelty impacts the effectiveness of news veracity interventions
Dorit Nevo, Benjamin D. Horne
Communications of the ACM, 65, 2022, pp.68-75.

This article provides insight into the effectiveness of fake news signals in novel news situations, focusing on AI-generated statements about the accuracy and reliability of an article. Specifically, we ask: when a novel situation arises, will AI interventions be more effective than when those same interventions are used on news articles about more familiar situations? We find that interventions are significantly more effective in novel news situations, implying the need for faster interventions and a better understanding of the acceptance and effectiveness of interventions in different settings.

Tailoring heuristics and timing AI interventions for supporting news veracity assessments
Benjamin D. Horne, Dorit Nevo, Sibel Adali, Lydia Manikonda, Clare Arrington
Computers in Human Behavior Reports, 2, 2020.

In this work, we seek to explore two important elements of AI tools’ design: the timing of news veracity interventions and the format of the presented interventions. Specifically, in two sequential studies, using data collected from news consumers through Amazon Mechanical Turk (AMT), we study whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations users are more receptive to the advice of the AI, and further, under this condition tailored advice is more effective than generic advice. We link our findings to prior literature on confirmation bias and provide insights for news providers and AI tool designers to help mitigate the negative consequences of misinformation.

Rating reliability and bias in news articles: Does AI assistance help everyone?
Benjamin D. Horne, Dorit Nevo, John O’Donovan, Jin Hee Cho, Sibel Adalı
Proceedings of the 13th International Conference on Web and Social Media, ICWSM 2019, 2019, pp.247-256

With the spread of false and misleading information in current news, many algorithmic tools have been introduced with the aim of assessing bias and reliability in written content. However, there has been little work exploring how effective these tools are at changing human perceptions of content. To this end, we conduct a study with 654 participants to understand whether algorithmic assistance improves the accuracy of reliability and bias perceptions, and whether the effectiveness of the AI assistance differs across types of news consumers. We find that AI assistance with feature-based explanations improves the accuracy of news perceptions. However, some consumers are helped more than others. Specifically, we find that participants who read and share news often on social media are worse at recognizing bias and reliability issues in news articles than those who do not, while frequent news readers and those familiar with politics perform much better. We discuss these differences and their implications to offer insights for future research.

Ivanov, A., Tacheva, Z., Alzaidan, A., Souyris, S., & England, A. C. (2023). Informational value of visual nudges during crises: Improving public health outcomes through social media engagement amid COVID-19. Production and Operations Management, 32, 2400–2419.

The paper explores how visual nudges on social media can ethically influence public health behavior during crises. Analyzing over 32,000 Instagram images from 117 universities, the study shows that non-coercive visual cues significantly reduced COVID-19 positivity rates, highlighting the power of nudge interventions in ethically managing public compliance.

“Popular Mobile Apps in The Pandemic Era: A Glimpse at Privacy and Competition,” Liad Wagman with Ginger Zhe Jin and Ziqiao Liu, Competition Policy International, 2023.

“Towards A Technological Overhaul of American Antitrust,” Liad Wagman with Ginger Zhe Jin and Daniel Sokol, American Bar Association, Antitrust, 37(1), 2022.

“The Persisting Effects of the EU General Data Protection Regulation on Technology Venture Investment,” Liad Wagman with Jian Jia and Ginger Zhe Jin, American Bar Association's Antitrust Source, 2021.

“Data Regulation and Technology Venture Investment: What Do We Learn From GDPR?,” Liad Wagman with Jian Jia and Ginger Zhe Jin, Competition Policy International, 2021.

“Local Network Effects in the Adoption of a Digital Platform,” Liad Wagman with Jin-Hyuk Kim, Peter Newberry, and Ran Wolff, Journal of Industrial Economics, 70(3), 493-524, 2022.

“The Short-Run Effects of GDPR on Technology Venture Investment,” Liad Wagman with Jian Jia and Ginger Zhe Jin, Marketing Science, 40(4), 593-812, 2021.

This body of work explores the complex intersection of data privacy, digital competition, and technology-driven markets. Through rigorous empirical analysis and policy-focused insights, it examines how regulations such as the European Union’s General Data Protection Regulation and shifting digital behaviors, including those during the pandemic, impact venture investment, app ecosystems, and antitrust enforcement.


Our Team

Kyle Roth, Digital Media Specialist

Tanya Singh, Assistant Professor in Ethics of Emerging Technologies

Contact

Lally School of Management
Rensselaer Polytechnic Institute
110 8th Street, Pittsburgh Building, Troy, NY 12180
(518) 276-2812