Responsible AI Report

By: Responsible AI Institute
  • Summary

  • Welcome to the RAI Report from the Responsible AI Institute. Each week we bring you the latest news and trends happening in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests spotlight emerging innovations through a practical lens, helping listeners implement and advance AI responsibly.

    Support the show
    Visit our website at responsible.ai.

    © 2024 Responsible AI Report
Episodes
  • Understanding AI Governance Structures with Megha Sinha, VP of AI/ML Practice at Genpact | EP 04
    Nov 21 2024

    In this episode of the Responsible AI Report, Patrick and Megha Sinha discuss the essential components of responsible AI governance. They explore the significant gap between AI ambitions and the resources available for implementing governance frameworks, emphasizing the need for organizations to establish clear ethical guidelines, accountability mechanisms, and cross-functional teams. Megha outlines an eight-step approach to building a responsible AI framework, highlighting the importance of transparency, bias mitigation, and continuous monitoring. The conversation also delves into the critical role of governance structures in ensuring accountability as global AI regulations evolve, and the necessity of incorporating responsible AI thinking from the design phase to prevent ethical and legal violations.

    Takeaways
    - 97% of organizations have set responsible AI goals, but 48% lack resources.
    - Establishing a code of conduct is critical for responsible AI.
    - Transparency is essential for building trust in AI systems.
    - Governance structures are vital for ensuring accountability.
    - Incorporate responsible AI thinking from the start of development.
    - Prevent ethical and legal violations by embedding responsible AI early.
    - Designing for explainability enhances accountability in AI.
    - Continuous monitoring is necessary for responsible AI frameworks.
    - Fostering a culture of responsible AI is crucial for success.
    - AI governance must adapt to evolving regulations.

    Learn more by visiting:
    https://www.genpact.com/
    https://www.linkedin.com/in/megha-sinha/

    Article Referenced: https://www.prnewswire.com/news-releases/97-of-ai-leaders-commit-to-responsible-ai-yet-nearly-half-lack-resources-to-achieve-the-necessary-governance-302252621.html

    Megha Sinha is an AI/ML leader with 15 years of expertise in shaping technology strategy and spearheading AI-driven transformations, and a Certified AI Governance Professional (IAPP). As the leader of the AI/ML & Responsible AI Platform competency in the Global AI Practice, Megha has built high-performing teams across ML Engineering, MLOps, LLMOps, and Responsible AI to architect and scale robust platforms. Her leadership drives the strategic integration of AI technologies, ensuring the delivery of impactful, ethical solutions that align with enterprise goals and industry standards. She spearheaded the end-to-end launch of an enterprise-grade generative AI knowledge management product, driving product strategy, go-to-market (GTM) execution, and competitive pricing models. A trusted advisor to client CXOs, she is known for her strategic foresight, her leadership in technology strategy and AI/ML solution design, and her ability to turn strategy into results through sound implementation. Her skill in navigating the complex AI landscape and guiding organizations toward measurable business outcomes instills confidence in her clients. Her leadership has enabled successful partnerships with industry bodies such as NASSCOM, joint solutions with Dataiku, and Responsible AI initiatives that benefit clients. She has been recognized with the Women in Tech Leadership Award and is a thought leader in AI strategy and responsible AI. With numerous technical publications in IEEE journals, she shapes the conversation around scaling AI with MLOps, LLMOps, ethics, governance, and the future of technology leadership, positioning her at the forefront of AI-driven business transformation.




    24 mins
  • The Role of Experts in AI Regulation with Dr. Richard Saldanha, Founding Member of IST's AI Special Interest Group, UK | EP 03
    Nov 7 2024

    In this episode of the Responsible AI Report, Patrick and Dr. Richard Saldanha discuss the EU's AI Code of Conduct and its collaborative approach to AI governance. They explore the importance of adaptability in regulations, the balance between innovation and safety, and the need for qualified personnel in regulatory bodies. Richard emphasizes the significance of a principles-based approach and the role of collaboration among stakeholders in shaping effective AI regulations.

    Takeaways

    • The EU AI Act aims to create a global model for AI regulations.
    • Collaboration between academia, industry, and civil society is crucial for effective AI governance.
    • A principles-based approach allows for flexibility in AI regulation.
    • Regulators should hire individuals with a strong understanding of technology.
    • Balancing regulation and innovation requires pragmatism from all parties involved.
    • A supportive regulatory environment can enhance technological development.
    • Finding consensus among diverse stakeholders can be challenging.
    • The UK aims to align with the EU AI Act while maintaining flexibility.
    • Professional accreditation in AI skills is essential for industry growth.


    Learn more by visiting:
    1. Referenced article: https://www.ainews.com/p/eu-gathers-experts-to-draft-ai-code-of-practice-for-general-ai-models

    2. EU AI Act 2024/1689: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

    3. UK Automated Vehicles Act 2024: https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted

    4. Richard's Queen Mary University of London profile: https://www.qmul.ac.uk/sef/staff/richardsaldanha.html

    5. Richard's Academic Speakers Bureau profile: https://www.academicspeakersbureau.com/speakers/richard-saldanha

    6. The UK Institute of Science and Technology (IST) website: https://istonline.org.uk/

    7. IST AI professional accreditation:
    https://istonline.org.uk/professional-registration/registered-artificial-intelligence-practitioners/

    8. IST AI training: https://istonline.org.uk/ist-artificial-intelligence-training/

    Dr. Richard Saldanha is one of the founding members of the Institute of Science and Technology's Artificial Intelligence Special Interest Group in the UK. He is actively involved in the development of the Institute's AI professional accreditation and hosts its online AI Seminar Series. Richard is a Visiting Lecturer at Queen Mary University of London, where he teaches Machine Learning in Finance on the Master's Degree Programme in the School of Economics and Finance. He is also an Industrial Collaborator on the AI for Control Problems Project at The Alan Turing Institute. Richard's earlier career was in quantitative finance (risk, trading, and investments), where he gained over two decades of experience working for institutions in the City of London. He remains actively engaged in quantitative finance via Oxquant, a consulting firm he co-heads with Dr Drago Indjic. Richard attended Oriel College, University of Oxford, and holds a doctorate (DPhil) in graph theory and multivariate analysis. He is a Fellow and Chartered Statistician of the Royal Statistical Society; a Science Council Chartered Scientist; a Fellow and Advanced Practitioner in Artificial Intelligence of the Institute of Science and Technology; a Member of the Institution of Engineering and Technology; and has recently joined the Responsible AI Institute.



    21 mins
  • The Intersection of AI and Healthcare with Dr. Jolley-Paige and Caraline Bruzinski, mpathic | EP 02
    Oct 24 2024

    In this episode of the Responsible AI Report, Patrick speaks with Caraline Bruzinski and Dr. Amber Jolley-Paige from mpathic about the intersection of AI and healthcare. They discuss the importance of measuring AI accuracy, the need for standardized testing, acceptable error rates in medical AI, and current trends in AI adoption within the healthcare sector. The conversation emphasizes the critical role of human oversight and expert involvement in ensuring the safety and efficacy of AI tools in medical applications.

    Takeaways

    • AI in healthcare requires domain-specific validation.
    • Human oversight is essential for AI accuracy.
    • Standardized testing for medical AI is crucial.
    • Acceptable error rates depend on potential harm.
    • Different healthcare sectors adopt AI at varying rates.
    • Generative AI is just one aspect of healthcare AI.
    • AI tools must be tailored to specific medical needs.
    • Experts should guide AI development and deployment.
    • The healthcare industry is still figuring out best practices.
    • AI advancements necessitate ongoing regulatory discussions.

    Learn more by visiting:
    https://mpathic.ai/
    https://www.linkedin.com/in/amber-jolley-paige-ph-d-72041b46/
    https://www.linkedin.com/in/caraline-7b22588b/

    Dr. Jolley-Paige is a licensed professional counselor, researcher, and educator with over a decade of experience in the mental health field. As the Vice President of Clinical Product and a founding team member at mpathic, she leads a team that uses an evidence-based labeling system to advance natural language processing technologies. Dr. Jolley-Paige leverages her extensive clinical, research, and teaching background to develop a conversation and insights engine, providing individuals and organizations with actionable insights for enhanced understanding.

    Caraline Bruzinski is a Senior Machine Learning Engineer at mpathic, where she models clinical trial data from therapist-client sessions with a focus on measuring empathy and therapist-patient conversational outcomes. Caraline specializes in refining models to achieve higher accuracy and reliability, developing custom ML models tailored to address specific clinical setting challenges, and conducting statistical analysis to enhance the accuracy and fairness of machine learning outcomes. With a Master’s degree in Computer Science, specifically focusing on AI/ML, from New York University and a background in data engineering, she brings extensive experience from her previous roles, including as Tech Lead at Glossier Inc. There, she developed a recommendation system that boosted sales by over $2M annually.

    The Responsible AI Report is produced by the Responsible AI Institute.



    18 mins
