
Anticipating the Mind-Machine: Governance Innovation for Frontier Technologies



Policy Brief No. 231 — March 2026


Key Points

→ Many jurisdictions around the world have been scrambling to address complex questions about generative artificial intelligence (AI) and intellectual property (IP) law. Authorship has been at the centre of the generative AI copyright debate, as some generated outputs are becoming almost indistinguishable from human-authored works.

→ This debate will soon expand to include additional subsets of frontier technologies, such as brain-computer interfaces (BCIs), and to engage different sets of regulatory frameworks.

→ Anticipatory governance, using strategic intelligence, can assist policy makers in developing a forward-looking, proactive governance structure and process. This strategy does not mean rapid or over-regulation but instead calls for a systems-level evolution in the way jurisdictions approach governance for frontier technologies as they emerge and converge.

Statement of the Issue

The field of BCIs is emerging and gaining traction in both therapeutic and non-therapeutic applications (Policy Horizons Canada 2025, 61–63) and has recently made headlines with the reported interest of a certain generative AI giant (Bort 2025). While not commercially scaled, numerous examples of BCIs are emerging that allow users to create works with their brain activity (Pinegger et al. 2017; AAAS Art of Science and Technology Program 2023; Vanutelli, Salvadore and Lucchiari 2023). Coupled with AI, these applications have produced creative outputs such as music and works of visual art that push the boundaries of machine-assisted human authorship (Aguero 2024; AAAS Art of Science and Technology Program 2023; Pinegger et al. 2017; Vanutelli, Salvadore and Lucchiari 2023).

There are various categories of BCI applications and methods of composition that can effectuate an artistic output. Some of these applications are active and others are passive. It is ultimately the level of human authorship translated through BCIs that will determine whether copyright can subsist. This is a particularly interesting topic, as the questions of copyright and the threshold of human authorship have taken centre stage in debates about generative AI technologies (United States Copyright Office 2025; Lucchi 2025, 38–39). In some jurisdictions, the prevailing consensus is that human authorship (and what this entails) is at the core of copyright and its raison d’être, while in others, an additional category of protection exists for computer-generated works.1

This policy brief aims to demonstrate that emerging applications of frontier technologies will continue to challenge conventional legal and policy frameworks, including in the area of IP. While exploring the intersection of BCIs and copyright is not new (Bruton 2014; Ramirez Caminatti 2023; Baker 2020), what is new is the current dialogue around human versus machine authorship thresholds brought on by the use of generative AI. This brief suggests that adopting a dynamic and forward-looking approach to frontier technology governance can help identify interdependencies, and better enhance understanding of these technologies’ legal, social and ethical impacts and challenges before they are deployed in the market. This brief does not promote substantive amendments to copyright law; rather, its intention is to provide an illustrative example of how policy makers can proactively explore new technologies and understand and anticipate potential implications, including those related to IP, privacy and consumer protection.

About the Author

B. Courtney Doagoo is a CIGI fellow and a principal consultant, strategist and interdisciplinary researcher specializing in technology law and governance, intellectual property, and public policy. Her work focuses on the ethical, legal and societal implications of frontier technologies. She works with universities, international organizations and the public sector, providing research, strategy, operations, program and management consulting advisory services. Courtney’s post-doctoral fellowship at CIGI was focused on research in technology policy and law. Courtney completed her Ph.D. in law at the University of Ottawa.

1 For example, Copyright, Designs and Patents Act, 1988 (UK), c 48, s 178, online: <www.legislation.gov.uk/ukpga/1988/48/contents>. See also Chesterman (2025, 27–29).

Summary of the Relevant Facts

BCIs: A Simple Overview

There are a few categories of emerging BCI technologies. For simplicity, this section will provide a high-level overview of the main binary categories of BCIs to demonstrate how they work and interact with users. The physical component of the BCI can be categorized into invasive or non-invasive devices. The application — how the BCI interacts with an individual’s brain waves — can be categorized into active and passive applications (Zander et al. 2010; Alimardani and Hiraki 2020).

Invasive BCIs are typically surgically implanted on the surface of the brain or within brain tissue, carrying more precise signals (Caiado and Ukolov 2025, 1). Current examples include devices being used in medical trials for therapeutic applications, for example, products from Neuralink, Blackrock Neurotech and Synchron. Non-invasive BCIs are worn or applied rather than implanted in the body and rely on external sensors to read brain signals (ibid.). These typically take the form of headbands or other external sensors applied to, but not implanted in, the head.

This aspect of BCIs, that is, whether they are invasive or non-invasive, is less relevant and would likely not impact whether a work could qualify for copyright protection. Rather, the question hinges on the interaction of BCIs with the individual to create the works, that is, whether the human users are actively or passively engaged with the BCIs. In this case, there are several different categories that explain the interaction between humans and BCIs. On the one hand, active BCIs typically require an individual to deliberately control certain functions or actions or make certain choices explicitly, that is, they are “consciously controlled by the user, independently from external events, for controlling an application” (Zander et al. 2010, 185; Alimardani and Hiraki 2020). These choices can be direct or selective but require some mental exertion or influence by the user (Wadeson, Nijholt and Nam 2015, 71).

On the other hand, passive BCIs monitor signals and activity that occur without the individual’s deliberate or intentional control over the inputs or outputs using an interface, that is, “arbitrary brain activity without the purpose of voluntary control, for enriching a human-computer interaction with implicit information” (Zander et al. 2010, 185; Alimardani and Hiraki 2020). In passive applications, there is a heavier reliance on pre-existing materials that are then channelled via the brain signals. In both passive and active use cases, the signals emitted from the brain are transmitted, captured and translated into a tangible output; in the passive case, however, these activities occur latently, meaning that they occur without the individual’s intention to create or control the inputs or outputs.

These distinctions are important because there are examples of artistic creation that fall under a combination of these categories (Prpa and Pasquier 2019; Vanutelli, Salvadore and Lucchiari 2023). Artistic BCIs have been organized into three categories based on process or composition methods: “audification (visualisation),” “musification/animation” and “instrument control” (Gürkök and Nijholt 2013, 828; see also Schreiner et al. 2025). For example, audification occurs when brain signals are mapped into audio or visual signals and the output is a “direct representation of brain activity” — it is an interpretation of brain signals into music or visual works (Gürkök and Nijholt 2013, 828). Musification or animation occurs when neural activity controls artistic tools to create new works. Finally, instrument control, an active application, is the scenario where brain signals are directly used to command physical or virtual instruments to create works (ibid.; Schreiner et al. 2025, 2). Illustrative examples include works exhibited at the American Association for the Advancement of Science in Washington, DC, in 2023, where patients with paralysis used invasive or implantable BCIs that required active engagement to create art using programs (AAAS Art of Science and Technology Program 2023; Blackrock Neurotech 2023). Similarly, in 2013, three musicians with paralysis developed music applications using a non-invasive, active method for a marketing campaign.2

As we enter the era of commercial non-invasive BCIs in the not-so-distant future, there will be more uses of BCIs that challenge existing applications and the ways we can create outputs. Imagine BCIs that can stimulate lucid dreaming and even allow one to control some aspects of their dreams.3 Could this level of control be considered enough to demonstrate sufficient skill and judgment? What about passive thoughts, emotions or daydreams rendered as visual outputs? There are numerous devices currently being developed that will begin to challenge the boundaries of creation.

Copyright Law

National copyright laws are heavily shaped by international conventions, treaties and multilateral agreements, in an effort to harmonize trade relations and reduce friction.4 The World Intellectual Property Organization supports member states in convening and building consensus on new treaties, rules and frameworks. It is therefore difficult to stray from the generally agreed-upon obligations (Ginsburg 2000). Despite harmonization efforts, the interpretation and application of these obligations can vary at the national level.

For copyright protection to subsist in a work in Canada, there is a requirement for originality, fixation and expression (not idea), and the work must fit within one of the categories set out in the Copyright Act.5 Originality requires skill and judgment, signalling an active expression of authorship. In clarifying these requirements, the Supreme Court of Canada held that skill requires “the use of one’s knowledge, developed aptitude or practised ability in producing the work” and judgment requires “the use of one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work,” concluding that “this exercise of skill and judgment will necessarily involve intellectual effort. The exercise of skill and judgment required to produce the work must not be so trivial that it could be characterized as a purely mechanical exercise.”6 Fixation “in some material form” is required, although this is not explicitly written in the Copyright Act for all works.7 Finally, the work must be an expression and not just an idea, meaning that there should be a level of expressive intent; as stated by the Supreme Court of Canada, “copyright law protects the expression of ideas in these works; it does not protect ideas in and of themselves.”8

2 See www.digitalgolem.com/portfolio/mindtunes/.

3 See Jarrett (2024); see also www.prophetic.com/.

4 See, for example, Berne Convention for the Protection of Literary and Artistic Works, completed at Paris on May 4, 1896, as amended on September 28, 1979, Canada TS 1998/18, online: <https://publications.gc.ca/pub?id=9.824867&sl=0>; Agreement on Trade-Related Aspects of Intellectual Property Rights, 15 April 1994, Marrakesh Agreement Establishing the World Trade Organization, Annex 1C, 1869 UNTS 3; 33 ILM 1197 (1994), online: <www.wto.org/english/docs_e/legal_e/27-trips.pdf>. See also Cottier (2021).

5 Copyright Act, RSC 1985, c C-42 [Copyright Act].

BCIs and Copyright Law in Canada

In applying Canada’s copyright framework at a very high level, fixation will not be an issue for active BCIs, because this will likely be satisfied when the brain signal communicates a command or an output and is recorded digitally or otherwise. Similarly, the expression of the idea requirement will likely be met in most active applications in cases where the BCI is able to translate details and direction — and not just high-level prompts — into artistic expression. Originality would require skill and judgment of the author, which is demonstrated in active BCIs where there is deliberate and conscious intervention or manipulation of an instrument to create a work. In these cases, the BCI is considered as a tool, and the output is shaped directly by an individual’s brain signal that controls the output.

Passive BCIs would not meet the threshold requirements, particularly if thoughts are being recorded arbitrarily and passively, so that skill and judgment or expressive intent is not adequately satisfied. Examples of passive BCIs may arise in works of performance or participatory art, where the audience’s passive brainwaves or emotional states can influence instruments or lighting, producing artistic output (Schreiner et al. 2025), or where brainwaves and body signals can be captured via an interface that can “express…emotions and the creative moment, the creative spark, in a completely new way” (as Ars Electronica Futurelab is exploring9). Where is the line drawn if individuals incorporate these passive outputs as inputs into a different work?

This overly simplistic overview omits the fact that technologies are quickly beginning to converge, as with, for example, generative AI or machine learning’s developing ability to decode thoughts and images (Nahas 2023; Conroy 2025). How will policy makers tackle the convergence of technologies of the not-so-distant future? Some examples of research and potential applications in these areas include:

→ AI interpreting “mental handwriting” (Goldman 2021);

→ BCIs capable of decoding “inner speech” (Goldman 2025);

→ BCIs and augmented reality/virtual reality technologies (Liu et al. 2024);

→ quantum technologies integrated with BCIs to allow multiple users to engage in collaborative ideation and creation (Neuroba 2025); and

→ the applicability of BCI in metaverse applications (Bernal et al. 2025).

As technology continues to evolve, grey zones that blur the line between human and machine authorship will emerge — areas that policy makers may have to begin considering now, rather than when these applications are commercially available to the public. For example, how do economic or moral rights fare in the context of collaborative BCI ideation and creation? Should thoughts, emotions or ideas visualized or expressed by intermediary technologies, including generative AI, receive protection? In some cases, depending on the composition and output, it may be difficult to discern copyrightable expression. These issues will only grow more pressing as BCIs and generative AI devices become embedded in aspects of our lives, challenging notions of authorship and human creativity.

6 CCH Canadian Ltd v Law Society of Upper Canada, 2004 SCC 13 [CCH v LSUC], para 16.

7 Copyright Act, supra note 5; Canadian Admiral Corp v Rediffusion Inc (1954), 20 CPR 75.

8 CCH v LSUC, supra note 6 at para 8.

9 See https://ars.electronica.art/futurelab/en/projects-future-ink/; https://ars.electronica.art/futurelab/en/projects-life-ink/.

Is This Different from Generative AI Analysis?

Until now, the human-machine authorship debate (generative AI and copyright) has focused on a very specific set of generative AI systems trained on massive amounts of pre-existing cultural and scientific works (often without rightsholders’ permission). The end user provides prompt instructions, and the program “synthesizes” new expressions from the existing underlying works (Lucchi 2025, 13).

In many cases, outside of providing prompts, there may be little human input, involvement, control, authorship or authorial expression attributed to generative AI outputs. Users may choose to edit and change these direct outputs, but ultimately, given that some generative AI works are almost indistinguishable from human-authored works, they may superficially pass for works that could receive copyright protection even if the generative AI user has not intervened. However, as Carys J. Craig (2022, 22) reminds us, “whether or not the AI is generating outputs that are facially equivalent to human-authored works, the AI lacks the intentionality, creative agency, and understanding necessary to engage in authorship as a relational and communicative act.”

Unlike generative AI, which currently relies heavily on pre-existing content that is synthesized, active BCIs can translate brain activity into original images, words and music. In many active BCI applications, there is decidedly a different level of human involvement, control and authorship than with generative AI. Individuals using active applications are deliberate about their engagement with the output. With passive BCIs it is less clear, because skill and judgment would not be present; however, given the right level of human intervention — such as the editing of generative AI outputs — these works may also, at some point, qualify.

The nexus of copyright and generative AI has been top of mind for policy makers around the world. The United States Copyright Office produced a report addressing the copyrightability of generative AI–assisted works, highlighting human control over expression and creativity as central factors. The guidance concluded that “in many circumstances these outputs will be copyrightable in whole or in part — where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements” (United States Copyright Office 2025, 41).

In 2025, the European Parliament’s JURI Committee commissioned a comprehensive study titled Generative AI and Copyright: Training, Creation, Regulation. The study determined that “core concepts such as authorship, originality, and the expression of ideas all presuppose a human agent making creative choices. This means that without a human author, there is no ‘expression’ in the legal sense — no transformation of ideas into protectable form” (Lucchi 2025, 93). Both documents underlined the human intention and meaningful human input or expressive element required to satisfy the authorship requirement.

What Is Canada’s Take?

On the topic of AI, Canada was the first country to unveil a national AI strategy, in 2017, focusing on building a strong Canadian AI ecosystem. However, Canada’s approach to AI governance has been described as “highly fragmentary” and not a nationally unified one (Attard-Frost, Brandusescu and Lyons 2024, 11). Despite renewed efforts to engage in an AI strategy, some have expressed concerns that recent activities may fall short of a comprehensive approach (Lau 2025). On September 26, 2025, the new minister of AI and digital innovation launched an AI Strategy Task Force and a 30-day public consultation.10 This followed Bill C-27, introduced in 2022, which aimed to reform private sector data protection law (the Consumer Privacy Protection Act), create a tribunal to support it (the Personal Information and Data Protection Tribunal Act), and govern general and high-impact AI systems and the use of data within AI systems (the Artificial Intelligence and Data Act [AIDA]).11 AIDA, rooted in a risk-based approach to anticipate and address harms pre-emptively, was met with widespread criticism for lacking adequate stakeholder consultation and engagement (Scassa 2022, 2023; Clement 2023). Although there are no plans to revive AIDA after it died when Parliament was prorogued in January 2025, it remains unclear how the government intends to re-engage on regulating AI systems (McLauchlan 2025).

10 See Innovation, Science and Economic Development Canada (ISED) (2025b); see also https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership.

11 Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022.

In relation to copyright, in 2023 the government held a public consultation on generative AI and copyright.12 The consultation paper stated that the government was considering whether the Copyright Act was “suited to address questions of authorship and ownership of AI-generated works” and that, even if it were suited, additional clarity would be helpful “to create more certainty in the marketplace” (ISED 2023, 12). The consultation period closed on January 15, 2024, and the “what we heard” consultation report was released a year later (ISED 2025a); as of December 2025, additional clarification had not been issued.

Given the government’s current approach to various aspects of AI governance, it seems unlikely that the copyright implications of frontier technologies such as BCIs will be on its agenda in the near future. While IP is only one consideration — and a decidedly contentious one, given international norms and harmonization — there are other more pressing governance and policy considerations, such as human rights, privacy13 and so forth. The current principles-based approach provides flexibility in governance; however, providing clarity in the form of forward-looking guidance, multidisciplinary research and non-binding points of view could go a long way in supporting certainty and understanding the interdependencies.

Options for Consideration

On the one hand, given the speed with which technology advances and the concerns about stifling innovation, the desire and pressure to be cautious regarding regulation is understandable. On the other hand, inaction or passivity can contribute to an environment of uncertainty that may itself impact the development and adoption of frontier technologies. BCIs are merely one example illustrating the challenges that policy makers may confront as technologies converge, as they have triggered a host of questions related to ethics, privacy, human rights and consumer protection — not just copyright and the possibility of protecting outputs (Alegre and Shull 2024; Berrick 2022).

12 See https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-copyright-age-generative-artificial-intelligence.

13 Interestingly, although not directly related to copyright law, on February 10, 2026, the Office of the Privacy Commissioner of Canada issued an Interpretation Bulletin adding neural data to the list of sensitive personal information requiring a higher degree of protection — signalling that some aspects of neurotechnology are actively on the radar. See Office of the Privacy Commissioner of Canada (2026).

Supporting the level of innovation that Canada wants to encourage requires a shift from its current approach to one that is active and open to experimentation and iteration. In this context, the “shift” does not mean that additional or rapid regulation is required to promote innovation. In fact, it is the opposite: the value proposition here is merely to examine, openly and collaboratively with a broad group of stakeholders, the boundaries of existing regulatory ecosystems and their implications as technologies are being developed — in anticipation of them entering the marketplace.

Planned carefully — unlike the experience with AIDA (Scassa 2023; Clement 2023) — agile and anticipatory governance14 could offer a receptive ecosystem for innovation and technology to flourish, while creating space that places participatory collaboration and design with key stakeholders at the centre of the process. Agile governance can aim to “reduc[e] regulatory uncertainties, enabl[e] experimentation, and [foster] collaboration between policymakers and innovators” (Winickoff and King 2025, paragraph 3). In the context of this policy brief, the author would like to underscore that it is not only innovators who should be a part of this process, but also all the stakeholders of the entire innovation ecosystem, which include technology creators, individuals who use the technologies and individuals who are impacted by them. Importantly, this ecosystem includes members of academia and civil society.

Policy makers should be encouraged to engage in multidisciplinary and strategic intelligence tools such as horizon scanning, foresight, and technology impact assessments to handle these complex questions (Organisation for Economic Co-operation and Development [OECD] 2024). In addition to these strategic intelligence tools, policy makers could consider creating multidisciplinary regulatory sandboxes and experiment with model laws, guidance and tools that incorporate participation and engagement from expert researchers and the private sector. These activities could include

14 See Winickoff and King (2025); see also www.oecd.org/en/topics/anticipatory-governance.html.

participants from across the government, including regulators in the areas of, for example, consumer protection and privacy, and policy makers across different departments. There is a need for participatory engagement not only to fully understand technologies and their capabilities, but also to flag the ethical and social risks they pose — as they are being designed, prior to being deployed (Winickoff and King 2025; OECD 2024).

Recommendations and Conclusion

The rapid advance of BCI technologies and their potential to disrupt conventional copyright paradigms suggest that policy makers will need to tackle far-reaching questions with more nuance, foresight and agility. Canada will require institutional and procedural shifts that align departmental goals, embrace uncertainty and experimentation, and encourage participatory engagement with a broad spectrum of stakeholders in order to create the structure and framework necessary to anticipate the regulation of frontier technologies.

In 2024, the OECD published its Framework for Anticipatory Governance of Emerging Technologies, which outlines five interdependent elements and tools with which policy makers can address the challenges related to emerging technology (OECD 2024, 11–12).

Specifically, the OECD notes that its framework “places anticipation at the centre of emerging technology governance. The common drivers [noted in the OECD report] — and experiences with, for example, AI, neurotechnology and genetic engineering — point to the need to take on new kinds of forward looking approaches to emerging technology governance. What might be called, ‘Anticipatory technology governance’ encourages a shift in how we imagine the challenge of governance from the management of technological risks to ‘getting ahead’ of technology developments (Guston, 2013)” (ibid., 10).

One of many manifestations could resemble a centralized frontier technology policy observatory or multidisciplinary group, leveraging capacities that include Policy Horizons, Canada’s centre of excellence in foresight15 — which, as a sidebar, flagged the risks related to AI and neurotechnology in its 2025 foresight report on AI (Policy Horizons Canada 2025, 61–63) — and members from the federal departments, in addition to private sector and academic experts in technology, ethics, law, humanities, engineering and other areas, to work together in an experimental approach.

In all, Canada has the knowledge, capacity and resources to make this work — all it needs is the foresight and the will to implement systems-level alignment and change.

Works Cited

AAAS Art of Science and Technology Program. 2023. “Creating Art with Thought Alone: First BCI-Generated Art Exhibit Opens in Washington, D.C.” News, April 5. Washington, DC: American Association for the Advancement of Science. www.aaas.org/news/creating-art-thought-alone-first-bci-generated-art-exhibit-opens-washington-dc.

Aguero, Dora. 2024. “From Mind to Image: Obvious’s Breakthrough in AI Art.” The Americas Collection, May 29. https://americascollection.com/news/from-mind-to-image-obviouss-breakthrough-in-ai-art/.

Alegre, Susie and Aaron Shull. 2024. Freedom of Thought: Reviving and Protecting a Forgotten Human Right. Special Report, September 4. Waterloo, ON: CIGI. www.cigionline.org/publications/freedom-of-thought-reviving-and-protecting-a-forgotten-human-right/.

Alimardani, Maryam and Kazuo Hiraki. 2020. “Passive Brain-Computer Interface for Enhanced Human-Robot Interaction.” Frontiers in Robotics and AI 7: Article 125. https://doi.org/10.3389/frobt.2020.00125.

Attard-Frost, Blair, Ana Brandusescu and Kelly Lyons. 2024. “The governance of artificial intelligence in Canada: Findings and opportunities from a review of 84 AI governance initiatives.” Government Information Quarterly 41 (2): 101929. https://doi.org/10.1016/j.giq.2024.101929.

Baker, Jonathan. 2020. “The Advent of Effortless Expression: An Examination of the Copyrightability of BCI-Encoded Brain Signals.” Minnesota Law Review 105: 389–427. https://minnesotalawreview.org/wp-content/uploads/2020/12/Baker_MLR-1.pdf.

15 See https://horizons.service.canada.ca/en/home/index.shtml.

Bernal, Sergio López, Mario Quiles Pérez, Enrique Tomás Martínez Beltrán, Gregorio Martínez Pérez and Alberto Huertas Celdrán. 2025. “When Brain–Computer Interfaces meet the metaverse: Landscape, demonstrator, trends, challenges, and concerns.” Neurocomputing 625: 129537. https://doi.org/10.1016/j.neucom.2025.129537.

Berrick, Daniel. 2022. “BCI Commercial and Government Use: Gaming, Education, Employment, and More.” Future of Privacy Forum (blog), February 8. https://fpf.org/blog/ bci-commercial-and-government-use-gamingeducation-employment-and-more/.

Blackrock Neurotech. 2023. “A mind controlled masterpiece: James Johnson creates art in Photoshop with BCI.” April 5. YouTube video, 4:07. www.youtube.com/watch?v=eW0jn7jAe1w.

Bort, Julie. 2025. “Sam Altman, OpenAI will reportedly back a startup that takes on Musk’s Neuralink.” TechCrunch, August 12. https://techcrunch.com/2025/08/12/sam-altman-openaiwill-reportedly-back-a-startup-that-takes-on-musks-neuralink/.

Bruton, Theo Austin. 2014. “Mind-Movies: Original Authorship as Applied to Works From ‘Mind-Reading’ Neurotechnology.” Chicago-Kent Journal of Intellectual Property 14 (1): 263–86. https://scholarship.kentlaw.iit.edu/ckjip/vol14/iss1/9/.

Caiado, Frederico and Arkadiy Ukolov. 2025. “The history, current state and future possibilities of the non-invasive brain computer interfaces.” Medicine in Novel Technology and Devices 25: 100353. https://doi.org/10.1016/j.medntd.2025.100353.

Chesterman, Simon. 2025. “Good models borrow, great models steal: intellectual property rights and generative AI.” Policy and Society 44 (1): 23–27. https://doi.org/10.1093/polsoc/puae006.

Clement, Andrew. 2023. “AIDA’s ‘Consultation Theatre’ Highlights Flaws in a So-called Agile Approach to AI Governance.” Opinion, Centre for International Governance Innovation, November 6. www.cigionline.org/articles/ aidas-consultation-theatre-highlights-flaws-in-a-socalled-agile-approach-to-ai-governance/.

Conroy, Gemma. 2025. “A mind-reading brain implant that comes with password protection.” Nature 644 (8078): 852–53. https://doi.org/10.1038/d41586-025-02589-5.

Cottier, Thomas. 2021. “World Intellectual Property Organization (WIPO).” In Max Planck Encycopedias of International Law. Heidelberg, Germany: Max Planck Institute for Comparative Public Law and International Law. https://opil.ouplaw. com/display/10.1093/law:epil/9780199231690/law9780199231690-e576?rskey=qzMZw1&result=1&prd=OPIL.

Craig, Carys J. 2022. “The AI-Copyright Challenge: Tech-Neutrality, Authorship, and the Public Interest.” Digital Commons All Papers, 360. Toronto, ON: Osgoode Hall Law School. https://digitalcommons.osgoode.yorku.ca/cgi/viewcontent.cgi?article=1368&context=all_papers.

Ginsburg, Jane C. 2000. “International Copyright: From a ‘Bundle’ of National Copyright Laws to a Supranational Code?” Journal of the Copyright Society of the USA 47: 265–89. https://scholarship.law.columbia.edu/faculty_scholarship/1212/.

Goldman, Bruce. 2021. “Software turns ‘mental handwriting’ into on-screen words, sentences.” Medical Research News, May 12. Stanford, CA: Stanford University School of Medicine. https://med.stanford.edu/news/all-news/2021/05/software-turns-handwriting-thoughts-into-on-screen-text.html.

———. 2025. “Study of promising speech-enabling interface offers hope for restoring communication.” Neurology & Neurosurgery News, August 15. Stanford, CA: Stanford University School of Medicine. https://med.stanford.edu/news/all-news/2025/08/brain-computer-interface.html.

Gürkök, Hayrettin and Anton Nijholt. 2013. “Affective Brain-Computer Interfaces for Arts.” In 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 827–31. Geneva, Switzerland: Institute of Electrical and Electronics Engineers. https://ieeexplore.ieee.org/document/6681547.

ISED. 2023. Consultation on Copyright in the Age of Generative Artificial Intelligence. https://ised-isde.canada.ca/site/strategic-policy-sector/sites/default/files/documents/2023-12/2023-consultation-paper-en.pdf.

———. 2025a. What We Heard Report: Consultation on Copyright in the Age of Generative Artificial Intelligence. January 30. https://ised-isde.canada.ca/site/strategic-policy-sector/en/marketplace-framework-policy/consultation-copyright-age-generative-artificial-intelligence-what-we-heard-report.

———. 2025b. “Government of Canada launches AI Strategy Task Force and public engagement on the development of the next AI strategy.” News release, September 26. www.canada.ca/en/innovation-science-economic-development/news/2025/09/government-of-canada-launches-ai-strategy-task-force-and-public-engagement-on-the-development-of-the-next-ai-strategy.html.

Jarrett, Christian. 2024. “This headset will let you control your dreams, Inception-style. Here’s what to know before you lose track of reality.” BBC Science Focus, January 4. www.sciencefocus.com/future-technology/halo-sleep-headband.

Lau, Yvonne. 2025. “‘Flawed and broken’: Critics warn Ottawa’s AI task force is too industry focused.” Financial Post, October 9. https://financialpost.com/technology/scholars-criticize-ottawa-new-ai-task-force.

Liu, Yang, Ruibin Liu, Jinnian Ge and Yue Wang. 2024. “Advancements in brain-machine interfaces for application in the metaverse.” Frontiers in Neuroscience 18:1383319. https://doi.org/10.3389/fnins.2024.1383319.

Lucchi, Nicola. 2025. “Generative AI and Copyright: Training, Creation, Regulation.” Study requested by the European Parliament, JURI Committee, July. Brussels, Belgium: Policy Department for Justice, Civil Liberties and Institutional Affairs, European Parliament. www.europarl.europa.eu/RegData/etudes/STUD/2025/774095/IUST_STU(2025)774095_EN.pdf.

McLauchlan, Madison. 2025. “Evan Solomon teases new AI laws as experts warn Canada is behind international peers.” BetaKit, October 24. https://betakit.com/evan-solomon-teases-new-ai-laws-as-experts-warn-canada-is-behind-international-peers/.

Nahas, Kamal. 2023. “AI re-creates what people see by reading their brain scans.” Science, March 7. www.science.org/content/article/ai-re-creates-what-people-see-reading-their-brain-scans.

Neuroba. 2025. “The Future of Brain-Computer Interfaces: AI and Quantum Tech Leading the Way.” Neuroba (blog), June 21. www.neuroba.com/post/the-future-of-brain-computer-interfaces-ai-and-quantum-tech-leading-the-way.

OECD. 2024. “Framework for Anticipatory Governance of Emerging Technologies.” OECD Science, Technology and Industry Policy Paper No. 165. Paris, France: OECD Publishing. https://doi.org/10.1787/0248ead5-en.

Office of the Privacy Commissioner of Canada. 2026. “Interpretation Bulletin: Sensitive Information.” February 10. www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/pipeda-compliance-help/pipeda-interpretation-bulletins/interpretations_10_sensible/#application.

Pinegger, Andreas, Hannah Hiebel, Selina C. Wriessnegger and Gernot R. Müller-Putz. 2017. “Composing only by thought: Novel application of the P300 brain-computer interface.” PLoS ONE 12 (9): e0181584. https://doi.org/10.1371/journal.pone.0181584.

Policy Horizons Canada. 2025. Foresight on AI: Policy considerations. Report, February 11. Ottawa, ON: Government of Canada. https://horizons.service.canada.ca/en/2025/02/10/ai-policy-consideration/pdf/foresight_on_ai_policy_considerations.pdf.

Prpa, Mirjana and Philippe Pasquier. 2019. “Brain-Computer Interfaces in Contemporary Art: A State of the Art and Taxonomy.” In Brain Art, edited by Anton Nijholt, 65–115. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-14323-7_3.

Ramirez Caminatti, Favio. 2023. “Copyrighting Brain Computer Interface: Where Neuroengineering Meets Intellectual Property Law.” Cybaris 14 (1): Article 1. https://open.mitchellhamline.edu/cybaris/vol14/iss1/1.

Scassa, Teresa. 2022. “Regulating AI in Canada — The Federal Government and the AIDA.” Teresa Scassa (blog), October 11. www.teresascassa.ca/index.php?option=com_k2&view=item&id=366:regulating-ai-in-canada-the-federal-government-and-the-aida&Itemid=80#:~:text=In%20June%202022%2C%20the%20federal,apply%20after%20harm%20has%20occurred).

———. 2023. “Regulating AI in Canada: A Critical Look at the Proposed Artificial Intelligence Data Act.” The Canadian Bar Review 101 (1): 1–30. https://cbr.cba.org/index.php/cbr/article/view/4817/4539.

Schreiner, Leonhard, Anouk Wipprecht, Ali Olyanasab, Sebastian Sieghartsleitner, Harald Pretl and Christoph Guger. 2025. “Brain–computer-interface-driven artistic expression: real-time cognitive visualization in the pangolin scales animatronic dress and screen dress.” Frontiers in Human Neuroscience 19:1516776. https://doi.org/10.3389/fnhum.2025.1516776.

United States Copyright Office. 2025. Copyright and Artificial Intelligence, Part 2: Copyrightability. A Report of the Register of Copyrights. January. www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part2-Copyrightability-Report.pdf.

Vanutelli, Maria Elide, Marco Salvadore and Claudio Lucchiari. 2023. “BCI Applications to Creativity: Review and Future Directions, from little-c to C2.” Brain Sciences 13 (4): 665. https://doi.org/10.3390/brainsci13040665.

Wadeson, Amy, Anton Nijholt and Chang S. Nam. 2015. “Artistic brain-computer interfaces: state-of-the-art control mechanisms.” Brain-Computer Interface 2 (2–3): 70–75. http://dx.doi.org/10.1080/2326263X.2015.1103155.

Winickoff, David and Rebecca King. 2025. “Why innovation needs smarter governance, not just faster tech.” OECD Blog, June 10. www.oecd.org/en/blogs/2025/06/why-innovation-needs-smarter-governance-not-just-faster-tech.html.

Zander, Thorsten O., Christian Kothe, Sabine Jatzev and Matti Gaertner. 2010. “Enhancing Human-Computer Interaction with Input from Active and Passive Brain-Computer Interfaces.” In Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, edited by Desney S. Tan and Anton Nijholt, 181–99. London, UK: Springer. https://doi.org/10.1007/978-1-84996-272-8_11.

About CIGI

The Centre for International Governance Innovation (CIGI) is an independent, non-partisan think tank whose peer-reviewed research and trusted analysis influence policy makers to innovate. Our global network of multidisciplinary researchers and strategic partnerships provide policy solutions for the digital era with one goal: to improve people’s lives everywhere. Headquartered in Waterloo, Canada, CIGI has received support from the Government of Canada, the Government of Ontario and founder Jim Balsillie.

Credits

Research Director, Digitalization, Security & Democracy Aaron Shull

Director, Programs Dianna English

Senior Program Manager Jenny Thiel

Publications Editor Lynn Schellenberg

Graphic Designer Sepideh Shomali

Copyright © 2026 by the Centre for International Governance Innovation

The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Centre for International Governance Innovation or its Board of Directors.

For publications enquiries, please contact publications@cigionline.org.

The text of this work is licensed under CC BY 4.0. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

For reuse or distribution, please include this copyright notice. This work may contain content (including but not limited to graphics, charts and photographs) used or reproduced under licence or with permission from third parties.

Permission to reproduce this content must be obtained from third parties directly. Centre for International Governance Innovation and CIGI are registered trademarks.

67 Erb Street West Waterloo, ON, Canada N2L 6C2 www.cigionline.org
