Articles Stock
AI
HHS Is Using AI Tools From Palantir to Target ‘DEI’ and ‘Gender Ideology’ in Grants

By Naveed Ahmad · 03/02/2026 (updated 03/02/2026) · 3 min read

**AI-Driven Grant Audits Expose the Dark Side of HHS’s DEI Crackdown**

    I just can’t get my head around this latest disturbing news from the Department of Health and Human Services (HHS). Apparently, they’re using AI-powered tools from Palantir to review grant applications, grants, and job descriptions within the Administration for Children and Families (ACF). The goal? To weed out anything that doesn’t align with President Trump’s executive orders targeting “gender ideology” and diversity, equity, and inclusion (DEI). Yeah, it sounds like a nightmare.

The report that broke this story revealed that Palantir was tasked with compiling a list of job descriptions that need to be revised to comply with these orders. But here’s the strange part: none of the documentation mentions DEI or “gender ideology” by name – it’s as if they’re trying to keep it under wraps.

Palantir, the company behind the AI tools, has a controversial history of working with government agencies like the CIA and NSA. In 2022, it received over $35 million in funds and obligations from HHS alone. And if that weren’t enough, another startup, Credal AI, co-founded by two Palantir alumni, is also involved. Its “Tech Enterprise Generative Artificial Intelligence (GenAI) Platform” helps ACF review existing grants and new applications, using AI to evaluate application submission data and flag potential issues. All of it gets routed to the ACF Program Office for final review.

So, what does this mean? Essentially, these AI tools are being used to target funding and silence marginalized voices associated with DEI and gender-related work. It’s a deeply problematic move that perpetuates harm and dangerous stereotypes. Imagine an AI assessing your research proposal or job application based on your gender identity or sexual orientation – it’s a chilling thought.

    The impact of these executive orders has already been felt far and wide. The National Science Foundation is flagging research that includes terms like “female,” “inclusion,” and “underrepresented,” while the CDC has started pulling back research that mentions “LGBT,” “transsexual,” or “nonbinary.” The Substance Abuse and Mental Health Services Administration even went so far as to remove an LGBTQ youth service line provided by the 988 Suicide & Crisis Lifeline. These are just a few examples of the devastating effect these orders have had on marginalized communities.

As we navigate this toxic political climate, we need to stay vigilant and speak out against these discriminatory practices. We need transparency and accountability from our government agencies, and marginalized communities need a voice in the conversation.

    **Original article:** [https://www.wired.com/story/hhs-is-using-ai-tools-from-palantir-to-target-dei-and-gender-ideology-in-grants/](https://www.wired.com/story/hhs-is-using-ai-tools-from-palantir-to-target-dei-and-gender-ideology-in-grants/)

    Naveed Ahmad
