
United Nations Sets a Global Baseline for Neurotechnology Ethics

At UNESCO’s General Conference, United Nations member states adopted the first global standard for neurotechnology ethics, aiming to protect mental privacy and freedom of thought as brain- and nerve-interface tools move into everyday products. The Recommendation takes effect on 12 November 2025 and draws clear lines, including a prohibition on marketing that targets people during sleep and dream states, plus tighter guardrails for deployments in classrooms and workplaces.


The timing reflects the recent acceleration in innovation. AI is expanding what can be inferred from neural signals and adjacent sensors, while consumer neurotech is spreading into earbuds, glasses, wristbands, and headbands. The UNESCO text formalizes “neural data” and calls for it to be treated as sensitive, with the Recommendation functioning as soft law that governments and industry can translate into rules and procurement checks.


UNESCO’s own framing emphasizes “enshrining the inviolability of the human mind,” a signal that mental-privacy boundaries are moving beyond debate and becoming core design criteria.


Inside the United Nations Neurotech Recommendation

UNESCO’s General Conference adopted a Recommendation, the UN system’s soft-law instrument, setting the first global normative framework for neurotechnology ethics. It closes a multi-year process: expert drafting through 2024, a final Secretariat report in April 2025, and an intergovernmental meeting of Member State experts held 12–16 May 2025 to agree the text before November adoption. The Recommendation enters into force on 12 November 2025.


The document defines neurotechnology as devices, systems, and procedures, hardware and software, that “measure, access, monitor, analyse, predict or modulate the nervous system,” across medical and non-medical contexts, including open- and closed-loop systems. It then elevates neural data to the status of sensitive personal data, calling for prior, free and informed opt-in consent, strict purpose limitation, and safeguards against conditioning access to services on disclosure.


Image: The first page of the Recommendation.

Several key controls are likely to shape future product roadmaps. Multifunction devices such as XR glasses or smart earbuds must include hardware-based controls that let users disable neuro-features while preserving basic functions. The text also addresses manipulation risks in recommender systems, priming and nudging, and closed-loop/immersive environments, and it prohibits marketing during sleep and dream states, all applying to neural data as well as indirect and non-neural signals used to infer mental states.


On the deployment of neurotechnology in institutions, the Recommendation sets strict guardrails. In workplaces, use must be strictly voluntary; consent alone is not a sufficient legal basis for intrusive processing; and neurotechnology should not be used for performance evaluation or punitive measures. Policies must also require data minimization, time and place limits, and automatic disabling outside working hours when multifunction devices are issued.
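As a thought experiment, the automatic-disabling requirement for employer-issued multifunction devices could reduce to a gate like the following minimal sketch. The function name, the opt-in flag, and the working-hours window are all hypothetical illustrations, not terms from the Recommendation:

```python
from datetime import datetime, time

# Hypothetical working-hours window; a real deployment would load this
# from a policy agreed with workers, as the Recommendation envisions.
WORK_START = time(9, 0)
WORK_END = time(17, 0)

def neuro_features_allowed(now: datetime, user_opted_in: bool) -> bool:
    """Neuro-sensing stays off unless the user opted in AND the time falls
    within agreed working hours; basic device functions are unaffected."""
    within_hours = WORK_START <= now.time() < WORK_END
    return user_opted_in and within_hours

# Outside working hours, neuro-features disable automatically
# even for users who opted in:
assert neuro_features_allowed(datetime(2025, 11, 12, 20, 0), True) is False
assert neuro_features_allowed(datetime(2025, 11, 12, 10, 0), True) is True
assert neuro_features_allowed(datetime(2025, 11, 12, 10, 0), False) is False
```

The point of the sketch is that the off state is the default: neuro-sensing requires both consent and a time-bounded context, while everything else the device does keeps working.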


In education, the Recommendation bars non-therapeutic performance enhancement for healthy children, requires evidence-based use with consent/assent and independent oversight, and prohibits using neurotechnology for student or educator performance evaluation.


Real-World Implications of the Recommendation

The Recommendation will have clear implications for the neurotechnology industry, with a particular focus on data classification. Any feature that can reveal mental state, including peripheral technology like EMG or contextual signals, should be handled as sensitive personal data by default. That directs teams toward minimization, on-device processing where possible, and privacy-by-design; the Recommendation explicitly mentions edge processing and storage as preferred safeguards.
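One way to read the sensitive-by-default rule in engineering terms is a data-handling policy that tags any mental-state-revealing signal as sensitive and routes it toward on-device processing and opt-in consent. A minimal sketch, where the signal names and policy fields are hypothetical examples rather than anything defined by UNESCO:

```python
from dataclasses import dataclass

# Hypothetical examples: EEG is directly neural, while EMG and gaze stand
# in for the "neural-adjacent" signals that can be used to infer mental state.
MENTAL_STATE_REVEALING = {"eeg", "emg", "gaze", "heart_rate_variability"}

@dataclass(frozen=True)
class HandlingPolicy:
    sensitive: bool
    prefer_edge_processing: bool
    requires_opt_in_consent: bool

def classify(signal_type: str) -> HandlingPolicy:
    """Default to sensitive handling for any signal that can reveal mental
    state, whether directly (neural) or indirectly (adjacent sensors)."""
    sensitive = signal_type.lower() in MENTAL_STATE_REVEALING
    return HandlingPolicy(
        sensitive=sensitive,
        prefer_edge_processing=sensitive,   # keep raw data on-device
        requires_opt_in_consent=sensitive,  # prior, free, informed opt-in
    )

assert classify("EEG").sensitive is True
assert classify("ambient_light").sensitive is False
```

The design choice worth noting is that sensitivity attaches to what a signal can reveal, not to whether the sensor is nominally a "brain" sensor, which mirrors the Recommendation's coverage of indirect and non-neural signals.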


Evidence and oversight expectations will also rise. The Recommendation calls for independent, pluralist ethics review of research protocols (medical and non-medical), registration of clinical trials and strengthened device-adverse-event reporting, plus AI validation that tests for bias with human oversight and transparency on data provenance.


For consumer neurotechnology, the Recommendation adds that non-medical claims should rest on robust scientific evidence, and any product veering into diagnosis, prevention, or treatment must pass safety and efficacy testing under appropriate supervision. These signals are likely to translate into stronger validation packages and documentation behind both product claims and enterprise proposals.


Global Neurotech Regulation

In the United States, a patchwork of state regulation is taking shape that the United Nations move is likely to accelerate. Colorado and California were the first to classify “neural data” as sensitive under state privacy law (2024), and Montana has since followed. Several other states have active or proposed measures, signaling continued state-level momentum into 2026. At the federal level, the MIND Act (introduced September 29, 2025) would direct the FTC to study neural-data governance and recommend a national framework.


Image: The Federal Trade Commission may soon govern Americans’ neural data.

Across the Atlantic, the EU AI Act creates adjacent constraints relevant to neurotech, most notably the early-applying prohibitions (from February 2025) on certain manipulative systems and restrictions on emotion inference, with broader obligations, including high-risk rules for product-embedded AI, phasing in over the next two years. This timeline means procurement and CE-marking strategies for neuro-enabled products will increasingly intersect with AI-Act duties, while UNESCO’s stance may spur more explicit neuro-specific regulation soon.


Latin America offers the clearest example of “hard law” neuro-rights so far. Chile amended its Constitution in 2021 to protect mental integrity and brain-derived information, and its courts have begun to test those principles, most visibly in a 2023 Supreme Court ruling ordering the deletion of EEG-based data collected without proper consent.


Taken together, the United Nations Recommendation, U.S. state and federal activity, and the EU AI Act point to a converging baseline: treat neural and neural-adjacent inferences as sensitive; avoid manipulative uses; build in hard off-switches and precise consent; and back claims with evidence. The Recommendation, in that sense, is more than a recommendation: it can and should be read as an industry imperative, because innovation can only scale when the trust architecture is in place.

