Bird & Bird senior associate Nora Santalu discusses what it’s like advising on the cutting edge of AI and biometric technologies and the regulatory grey areas in a booming industry

Ahead of this afternoon’s Legal Cheek virtual student event ‘What does an AI lawyer do at Bird & Bird?’, we caught up with Nora Santalu, senior associate in the firm’s privacy and data protection team in London, to hear about the legal and ethical challenges of biometrics and AI, regulations shaping the field and what life is like day-to-day in her practice.
Despite all the buzz around artificial intelligence, do you know what an ‘AI lawyer’ really does? As AI and other technologies transform entire industries, this unique role is coming to the forefront and it’s more important than ever to stay in the know.
Being the cool tech-bro of the legal world, it’s no surprise that many of Bird & Bird’s lawyers have technological leanings. Initially set on going into the sciences, Santalu found herself nudged towards law by family influence, yet she never lost her fascination with technology. As a student she worked at a startup building electric unicycles, liaising closely with its engineers. A casual lunchtime discussion with the tech team about the Silk Road (the notorious dark web marketplace) sent her down a YouTube research rabbit hole to figure out how such systems operate. That curiosity led her to write a university dissertation on Bitcoin and money laundering, which in turn helped her land a training contract focusing on anti-money laundering in payment technologies, including cryptocurrencies. By 2018, with the GDPR coming into force, she was drawn into the world of data privacy and emerging tech, and Bird & Bird seemed a natural next step.
Currently, Santalu works at the intersection of law and cutting-edge tech products, mostly in the realm of AI, computer vision and biometrics. “Biometrics”, Santalu explains, “historically meant any technology that can identify you by your body”. Fingerprint and facial recognition are common examples, but it goes much further. “Even a person’s voice is unique enough to serve as an identifier; think of how you’d recognise a family member just from hearing them call out from another room.” Less obviously, our behavioural quirks can be biometric markers too. “Even the speed and pattern of your typing, for instance, can give away your identity,” Santalu continues. Some of these tools run in the background and are used in fraud prevention to protect people from hackers and fraudsters. Biometrics is truly more than just the passport e-gate face scan, and its applications now stretch beyond simply identifying people. One especially positive use case is in healthcare. “AI systems can analyse bodily data to detect illnesses, such as detecting early signs of dementia from changes in eye movements,” Santalu explains. It’s a powerful illustration of how AI driven by biometric data (in this case, images of the body) can potentially save lives. “These developments are exciting,” she says, “but they also force us to consider the privacy implications they might have”.
Advising on these technologies means navigating understandably complex, evolving legal frameworks. Two pillars underpin much of her work: the familiar General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act. A common theme in both, she explains, is transparency. “Users (and regulators) expect to know what new technologies are doing and why,” she says. “Products need to be explainable. Both GDPR and the AI Act stress that AI shouldn’t be a mysterious black box, and it must be understandable to individuals.” Another key focus is spotting potential misuse. Every technology might be designed with a good purpose, but a lawyer’s job is to think about how it could be misused in practice. Santalu advises clients to “close the gaps” by building in safeguards so that bad actors can’t repurpose an AI tool for harmful ends. By anticipating worst-case scenarios, she helps companies tweak their products and policies to prevent unwelcome surprises down the line and stay compliant with the law.
With AI law constantly evolving, keeping up with international developments has become part of the job. Europe may be leading with its comprehensive AI Act, but we’re now seeing a spread of AI laws around the globe. “Countries are rolling out their own AI regulations, and companies operating across borders face a complicated network of rules to navigate,” Santalu explains. “We’re seeing AI laws pop up everywhere.” Different jurisdictions often cover different bases. Just within the US for example, Colorado’s approach looks at high-risk AI systems, but doesn’t address generative AI, whereas California’s new rules are predominantly focused on generative AI.
The EU AI Act, on the other hand, casts a very wide net, regulating everything from general-purpose AI models (like large language models) to specific use cases classified by risk, and even outright prohibiting certain practices such as using AI for subliminal manipulation. “By contrast,” Santalu says, “the UK is adopting a cautious wait-and-see stance”. Britain, she notes, isn’t rushing to copy the EU AI Act, at least for now. “Instead, the UK has thus far relied on existing laws (like the UK GDPR) and guidance from regulators such as the Information Commissioner’s Office.” Just today, the UK Government launched a new regulatory framework for artificial intelligence, which focuses on AI sandboxes in which regulations can be temporarily relaxed or adapted. This concept, which also exists under the EU AI Act, is becoming a more popular regulatory choice, with US Senator Cruz recently proposing it at the federal level.
Even though she often works independently on her matters, collaboration is a big part of her day-to-day routine as well. Bird & Bird’s tech law team operates very much as a collective because “it’s always better to have two pairs of eyes,” she notes. Her colleagues also have their own specialisms: while she works on all types of AI and is the go-to for biometrics and fraud prevention, others focus on areas like medical AI devices, contracting for AI, or IP in AI, which can be useful for particular client problems. Crucially, this collaborative approach is baked into the culture of the firm. She describes Bird & Bird’s environment as highly supportive and flat in hierarchy. “People are very smart, but also very friendly,” she says, noting that there isn’t “an attitude of senior lawyers pulling rank or hoarding knowledge”.
Trainees and NQs are especially encouraged to “speak their mind”. She appreciates being challenged by junior colleagues who aren’t just “yes-people” as it shows they’re “thinking deeply and passionately about the issues”. She stresses that the junior lawyers who stand out are the ones with an eagerness to learn, those who might even surprise her with a new insight she hadn’t come across.
As we wrap up, the conversation turns to advice for those aspiring to enter the legal profession. “Honestly I would say it’s not a short race,” she emphasises. Even if you don’t snag that coveted training contract on the first try, you can still get to where you want to be if you have a true passion for it. Plenty of her colleagues have taken non-linear paths. Far from being wasted time, those detours often turn into advantages, bringing unique perspectives and skills to their legal careers. The key, she suggests, is to view each experience as a learning opportunity that could benefit you later.
So, for students nervous about the competitive training contract process or feeling overwhelmed by the need to be “commercially aware”, her advice is to keep the faith. If you’re genuinely interested and keep learning, eventually you will find your way into that role you’re aiming for.
Nora Santalu will be speaking at this afternoon’s virtual student event, ‘What does an AI lawyer do at Bird & Bird?’. Apply now.