We are launching an AI for Individual Rights program at HRF
Excited to see how we can apply learnings from working with Bitcoin and open-source tools to this field
Details and application link for the new director position below 👇
---
The Human Rights Foundation is embarking on a multi-year plan to create a pioneering AI for Individual Rights program to help steer the world’s AI industry and tools away from repression, censorship, and surveillance, and towards individual freedom.
HRF is now seeking a Director of AI for Individual Rights to lead this work. Apply today with a cover letter describing why you are a good fit for this role, as well as a resume and the names of three individuals you would suggest as references.
This initiative comes at a moment when AI tools made by the Chinese Communist Party are among the best in the world and are displacing tools made by corporations and associations inside liberal democracies. It also comes at a moment when open-source AI tools have never been more powerful, and the opportunity to use them to strengthen and expand the work dissidents do inside authoritarian regimes has never been greater. When citizens hold their governments accountable, they should use the most advanced technology possible.
There are many “AI ethics” working groups, associations, non-profits, industry papers, and centers already in existence, but none focuses on authoritarian regimes. Many are bought off by the Chinese government and refuse to criticize its use of AI for repression in the Uyghur region, in Tibet, in Hong Kong, and elsewhere. Others are influenced by the Saudi or Russian governments and hold their tongues on too many issues. Others still are very close to the US government and must mind a different set of political alliances.
HRF will establish the first fully sovereign program, free to monitor and expose AI used by autocrats as a tool of repression and to support open-source AI tools in the hands of dissidents, especially those laboring under tyranny.
Critically, this program will not be oriented towards preventing “superintelligence” risk or concerned with an AGI becoming catastrophically powerful. While those might be worthy efforts, this program will be entirely focused on tracking and challenging how authoritarian regimes are using AI and helping spark the proliferation of open-source tools that can empower and liberate individuals.