Missouri, 1994. Christopher Simmons, seventeen years old at the time of his crime, is sentenced to death for murder. A decade later, his case reaches the U.S. Supreme Court. The question before the justices is not whether Simmons committed the crime — he did — but whether executing someone for a crime committed as a juvenile constitutes cruel and unusual punishment.
The Court rules 5–4 that it does. Justice Anthony Kennedy, writing for the majority, cites an unusual body of evidence: not legal precedent alone, but neuroscience. Adolescent brains, the Court acknowledges, are structurally and functionally different from adult brains. The prefrontal cortex — the region responsible for impulse control, planning, and the weighing of consequences — is not fully developed until the early-to-mid twenties. Adolescents, the Court concludes, possess less culpability. They are constitutionally different agents.
Roper v. Simmons did not simply change sentencing law. It quietly inserted a volatile idea into the machinery of justice: that the biological architecture of the brain should shape our moral and legal judgments about blame.
That idea has only grown louder. In 2023, Stanford neurobiologist Robert Sapolsky published Determined, a 528-page argument that free will is an illusion — not as philosophy, but as science. Every action you take, Sapolsky argues, is the inevitable product of your neurons, your hormones, your childhood, your genes, your culture, and the evolutionary pressures that shaped your species over millions of years. There is no ghost in the machine. There is no moment where a disembodied "you" steps outside the causal chain and chooses.
If Sapolsky is right, the implications are staggering. Not just for criminal justice, but for performance reviews, parenting, merit-based hiring, moral praise, and the entire scaffolding of institutions built on the assumption that individuals deserve what they get.
But here is the complication: most people — including most philosophers — are not convinced. The 2020 PhilPapers Survey found that 59% of professional philosophers accept or lean toward compatibilism, the view that free will and determinism can coexist. The neuroscience that Sapolsky marshals is largely undisputed. What he concludes from it is not.
So this is the territory we're entering. Not the question of whether biology shapes behaviour — it obviously does. But the harder question: does biology excuse behaviour? And if it does, even partially, what happens to the concept of blame?
Before we can ask whether free will exists, we need to understand what actually happens inside the brain when a human being perceives, evaluates, and acts. The neuroscience of decision-making is not a single pathway or a single region. It is a distributed network of structures that evolved over hundreds of millions of years, layered on top of each other like geological strata — ancient threat-detection systems at the base, recently evolved deliberative systems at the top, and a tangle of bidirectional connections in between.
Here is what that network looks like, traced from stimulus to action.
Every decision begins with sensory input — light hitting the retina, sound waves vibrating the eardrum, the pressure of a hand on your shoulder. These raw signals travel along dedicated neural pathways to the thalamus, a relay station deep in the centre of the brain. The thalamus performs a first, crude sorting: it routes visual information to the visual cortex, auditory information to the auditory cortex, and so on. But it also does something else. It sends a fast, low-resolution copy of the incoming signal directly to the amygdala — before the cortex has had time to process the information in detail.
This is the brain's early-warning system, first characterised by neuroscientist Joseph LeDoux as the "low road." It explains why you flinch at a stick on a hiking trail before you consciously recognise it as a stick and not a snake. The amygdala has already tagged the input as potentially dangerous and triggered a cascade of physiological responses — increased heart rate, adrenaline release, muscle tension — in roughly 12 milliseconds. Conscious awareness catches up about half a second later.
The amygdala is an almond-shaped cluster of nuclei nestled in the medial temporal lobe, one in each hemisphere. Its primary function is not to produce fear, as popular accounts often suggest, but to assign emotional significance to incoming stimuli. It answers the most ancient biological question: is this thing good for me, bad for me, or irrelevant?
It does this through two principal subdivisions. The basolateral complex (BLA) receives sensory information from the cortex and thalamus and learns to associate stimuli with their emotional outcomes — this is the neural substrate of classical conditioning. A child who touches a hot stove forms, in the BLA, an association between the sight of the stove and the pain that followed. The central nucleus (CeA) then translates these appraisals into bodily responses: it projects to the hypothalamus (triggering hormonal cascades), the brainstem (altering heart rate and breathing), and the periaqueductal grey (producing freezing or fight-or-flight responses).
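The associative learning attributed to the BLA is often described computationally with the classic Rescorla-Wagner rule, in which a cue's associative strength is updated in proportion to the prediction error on each trial. The sketch below is a toy illustration of that standard rule, not a model of amygdala physiology; the parameter values are arbitrary.

```python
# Toy Rescorla-Wagner model of the associative learning attributed to the
# basolateral amygdala: the associative strength V of a cue is nudged toward
# the outcome it predicts, by an amount proportional to the prediction error.
# Parameters are illustrative, not physiological.
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return the cue's associative strength after each conditioning trial.
    alpha: learning rate; lam: maximum strength the outcome supports."""
    V = 0.0
    history = []
    for _ in range(trials):
        V += alpha * (lam - V)   # prediction error (lam - V) drives learning
        history.append(V)
    return history

strengths = rescorla_wagner(10)
# strengths[0] == 0.3; after ten trials V has climbed to about 0.97
```

Run over ten trials, the curve rises steeply and then plateaus: the negatively accelerated learning curve characteristic of classical conditioning, in which early pairings (stove, pain) do most of the work.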
Crucially, the amygdala operates largely beneath conscious awareness. Both conditioned and unconditioned emotional stimuli activate the amygdala and trigger autonomic nervous system responses even when the stimulus is masked — presented so briefly that the participant cannot report seeing it. This is implicit emotional processing: your body is responding to a threat before "you" know the threat is there.
If the amygdala can hijack behaviour before conscious awareness kicks in — triggering a flinch, a surge of rage, a reflexive avoidance — then the notion that every action passes through a checkpoint of conscious deliberation is already in trouble. The question is how much of our morally significant behaviour is shaped by these fast, sub-cortical systems versus the slower, deliberative ones.
While the amygdala asks "is this dangerous?", the hippocampus asks "have I seen this before, and what happened?" Located adjacent to the amygdala in the medial temporal lobe, the hippocampus is the brain's primary engine for forming new episodic memories — memories of specific events bound to a time and place.
The hippocampus and amygdala are not separate systems. They are deeply interconnected, and their interaction shapes what we remember and how we decide. When the amygdala tags an experience as emotionally significant, it modulates hippocampal encoding — strengthening the synaptic connections that will consolidate that memory into long-term storage. This is the "emotional tagging" hypothesis: emotionally arousing events are remembered better because the amygdala effectively tells the hippocampus, this one matters — encode it deeply. Research using direct intracranial recordings in human patients has confirmed that successful emotional memory encoding depends on coordinated high-frequency activity between the amygdala and hippocampus, particularly through amygdala theta rhythms that pace hippocampal gamma oscillations.
This has a profound implication for moral agency. Every decision you make is shaped by your personal history of emotional memories — memories that were themselves shaped by amygdala-hippocampal interactions you never consciously controlled. A person who was bitten by a dog at age four does not choose to feel fear around dogs at age forty. That fear is encoded in neural circuits laid down decades earlier, and it influences behaviour — risk assessment, avoidance, even aggression — through pathways that operate largely outside of deliberate control.
The hippocampus also provides contextual information that shapes whether the amygdala fires at all. A loud bang in a war zone activates a very different amygdala response than a loud bang at a fireworks show, because the hippocampus supplies the contextual frame. When this contextual processing fails — as it does in post-traumatic stress disorder — the amygdala treats safe environments as dangerous ones, and the person responds accordingly. They are not "choosing" to overreact. Their hippocampal-amygdala circuit is misfiring.
If the amygdala is the accelerator and the hippocampus is the rearview mirror, the prefrontal cortex (PFC) is the brakes, the steering wheel, and the GPS. It is the most recently evolved region of the human brain, vastly expanded compared to other primates, and it is responsible for what neuroscientists call executive function: planning, abstract reasoning, impulse inhibition, working memory, and the ability to weigh long-term consequences against immediate rewards.
The PFC is not one structure but a family of interconnected sub-regions, each with a distinct role in decision-making:
The dorsolateral prefrontal cortex (dlPFC) holds multiple pieces of information in mind simultaneously, integrates harm severity with culpability to determine appropriate punishment, and is central to multi-step reasoning and goal maintenance.
The ventromedial prefrontal cortex (vmPFC) integrates emotional signals from the amygdala with cognitive assessments. Patients with vmPFC damage (like Phineas Gage) show intact reasoning but catastrophically impaired real-world decision-making — they "know" what's right but cannot "feel" it.
The orbitofrontal cortex (OFC) rapidly signals stimulus values once learning is complete and updates reward expectations when contingencies change. It enables the shift from "this used to be rewarding" to "this is no longer rewarding" — the neural basis of adaptive behaviour.
The anterior cingulate cortex (ACC) detects when actions conflict with goals. It fires when you make a mistake — and research shows that reduced ACC activity during error monitoring predicts re-arrest in released offenders. It is, in a real sense, the neural signature of learning from one's own errors.
The critical insight is this: the PFC does not operate independently. It is in constant bidirectional communication with the amygdala. Neurons in the medial PFC and orbitofrontal cortex strongly project to the amygdala, and also receive substantial projections back from it — an evolutionarily conserved circuit found in humans, primates, and rodents. When this circuit functions well, the PFC can regulate amygdala reactivity — dampening fear responses, overriding impulsive urges, enabling you to choose the long-term reward over the immediate one. This is what cognitive neuroscientists mean by "top-down regulation": the cortex sending inhibitory signals downward to subcortical systems.
But when the PFC is compromised — by immaturity, by developmental trauma, by chronic stress, by lesions, by alcohol, by fatigue — this top-down regulation weakens. The amygdala gains the upper hand. Impulses are not inhibited. Fear responses are not modulated. The person acts on the ancient "low road" rather than the recently evolved "high road."
When a decision finally crystallises, it travels from the PFC and associated motor planning areas through the basal ganglia — which selects and sequences actions, automates habits, and serves as a gateway between intention and movement — to the motor cortex, which executes the physical behaviour. The basal ganglia deserve particular attention because they are the seat of procedural memory: the implicit, unconscious knowledge of how to do things. Riding a bicycle, typing on a keyboard, driving a familiar route home — these are all basal ganglia-driven behaviours that were once conscious and deliberate but have been automatised through repetition.
Much of moral behaviour follows the same pattern. A person raised in a household where honesty was consistently modelled and reinforced develops habitual honesty — not through conscious deliberation at each moment, but through basal ganglia-mediated habit circuits. A person raised in an environment where deception was adaptive — where telling the truth was punished and lying was rewarded — develops habitual dishonesty through the same neural machinery. The habits are different. The machinery is the same.
This is the picture that emerges from four decades of cognitive neuroscience: human decision-making is the product of a complex, distributed network in which ancient emotional systems (amygdala), memory systems (hippocampus), habit systems (basal ganglia), and recently evolved deliberative systems (prefrontal cortex) compete and cooperate, often beneath the threshold of conscious awareness. The "decision" that reaches consciousness — the one that feels like "yours" — is the output of this network, not its cause.
Whether that output constitutes "free will" depends entirely on what you mean by the term. And that is where the real argument begins.
The modern neuroscience of free will begins, somewhat improbably, with a clock and a wrist-flick.
In 1983, neurophysiologist Benjamin Libet asked participants to flex their wrists at a time of their own choosing, while watching a modified clock face to note the precise moment they felt the conscious urge to move. Simultaneously, he recorded their brain activity using electroencephalography (EEG).
The finding was unsettling. A slow buildup of electrical activity — the "readiness potential" (Bereitschaftspotential) — began approximately 550 milliseconds before the movement. But the participants' reported awareness of their intention to move occurred only about 200 milliseconds before the movement. The brain was "getting ready" a full 350 milliseconds before consciousness showed up.
(Figure: the Libet timeline. The readiness potential precedes conscious awareness by roughly 350 ms. Source: Libet et al., Brain, 1983.)
The implications, as interpreted by many neuroscientists, seemed clear: your brain initiates actions before "you" — the conscious, deliberating self — are even aware of them. Consciousness, in this reading, is not the author of your decisions but a narrator arriving after the fact, constructing a post-hoc story of agency.
This is the finding that launched a thousand headlines. And it has been both replicated and significantly challenged.
In 2012, Aaron Schurger, Jacobo Sitt, and Stanislas Dehaene published a study in the Proceedings of the National Academy of Sciences proposing that the readiness potential is not a neural signature of "decision preparation" at all. Instead, it may simply reflect stochastic fluctuations — random neural noise — that, when averaged across many trials and aligned to the moment of action, produce what looks like a deliberate ramp-up. The readiness potential, in this model, is an artefact of the experimental averaging method, not evidence of unconscious determination.
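Schurger's point is easy to demonstrate in simulation. The sketch below is an illustration with arbitrary parameters, not a reimplementation of their fitted model: it runs a leaky accumulator driven purely by Gaussian noise, time-locks each run to the moment it happens to cross a threshold, and averages. A smooth ramp appears in the average even though no individual run contains a deliberate "preparation" signal.

```python
import random

# Leaky stochastic accumulator: pure Gaussian noise plus decay toward zero.
# A "movement" occurs whenever the accumulator happens to cross threshold.
# Parameters are illustrative, not fitted to Schurger et al.'s model.
def trial(threshold=0.8, leak=0.05, noise=0.1, max_steps=5000):
    """Run one accumulator until it crosses threshold.
    Returns the trace up to the crossing, or None if it never crosses."""
    x, trace = 0.0, []
    for _ in range(max_steps):
        x += -leak * x + random.gauss(0.0, noise)
        trace.append(x)
        if x >= threshold:
            return trace
    return None

random.seed(0)
window = 200  # samples kept before each threshold crossing
runs = [t for t in (trial() for _ in range(300)) if t and len(t) >= window]
aligned = [t[-window:] for t in runs]          # time-lock to the crossing
avg = [sum(col) / len(col) for col in zip(*aligned)]
# avg rises smoothly toward the threshold, resembling a readiness
# potential, even though each run is nothing but drifting noise.
```

The averaging is doing all the work: aligning to the crossing guarantees that every run ends high, while the pre-crossing samples hover near zero, so the mean traces out a ramp that no single trial contains.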
A 2021 meta-analysis of nearly forty Libet-style studies confirmed that the basic temporal pattern replicates, but found the effect sizes variable and the underlying evidence base thinner than the finding's fame would suggest. A 2023 study from the Higher School of Economics found that the reported moment of "conscious intention" can be shaped by the experimental procedure itself: the instruction to introspect about the moment of decision appears to create the sense that it occurred at a specific point in time.
The experiment tells us that voluntary actions are not initiated by a bolt of pure conscious will descending into an inert brain. Motor preparation is gradual, distributed, and begins before we can articulate an intention.
But the experiment does not tell us that consciousness is causally irrelevant. Even Libet himself argued that consciousness retains a "veto power" — the ability to cancel an action in the roughly 200 ms between awareness and execution.
More fundamentally: the Libet experiment asks participants to make trivial, arbitrary movements with no consequences. The gap between "when did you feel like moving your wrist?" and "should I commit armed robbery?" is not a gap that neuroscience has bridged.
Robert Sapolsky does not rest his case on Libet. His argument in Determined layers neuroscience, endocrinology, genetics, epigenetics, developmental biology, and evolutionary psychology to build a picture of human behaviour as the inevitable outcome of causes stretching back not just seconds but geological epochs.
Sapolsky's most compelling move is not any single piece of evidence but the cumulative picture. Children raised in high-stress environments show reduced grey matter volume in the prefrontal cortex, hyperreactivity in the amygdala, and dysregulated stress-response systems. These are structural and functional alterations that affect impulse control, emotional regulation, and decision-making for life. Did the five-year-old who experienced this adversity choose it? Obviously not. And if that adversity measurably impairs the very brain systems we rely on for self-control, in what sense does the adult who acts impulsively bear full moral responsibility?
Sapolsky insists the logic applies universally. The child who had loving parents and excellent schools was also shaped by causes beyond their control. The "self-made" person was made by circumstances they did not choose — including the circumstance of having a brain capable of discipline and ambition.
The debate has crystallised into three positions, each with profound institutional implications.
Free will scepticism: Sapolsky · Harris · Pereboom
All behaviour is determined by prior causes; no one truly deserves praise or blame. Punishment should exist only for instrumental reasons — deterrence, incapacitation, rehabilitation — never as retribution.
Sapolsky's analogy: we once blamed epileptics and attributed their seizures to demonic possession. Neuroscience made blame incoherent. The same process should occur for all behaviour.
Compatibilism: Frankfurt · Dennett · Fischer · Wolf
The kind of freedom that matters is not freedom from causation but freedom from specific constraints — coercion, compulsion, severe mental illness. An agent who acts from rational deliberation and is responsive to reasons is "free enough" for moral responsibility.
This is the majority position: 59% of professional philosophers. It maps closely to how legal systems already operate.
Harry Frankfurt's 1969 thought experiment captures the compatibilist intuition. Jones decides to vote for Candidate A. Unbeknownst to him, a neuroscientist, Black, has implanted a device that will force Jones to vote for A if he shows any inclination otherwise. But Jones votes for A entirely on his own. Black never intervenes.
Is Jones morally responsible? Most people's intuition says yes — even though Jones could not have done otherwise. Frankfurt's conclusion: responsibility doesn't require the ability to do otherwise. It requires acting from your own reasons, in the absence of actual interference.
In his 1962 essay "Freedom and Resentment," P. F. Strawson argued that the entire debate about determinism and responsibility may be beside the point. What matters is not whether our actions are determined, but that we are constitutively the kind of beings who have reactive attitudes — resentment, gratitude, indignation, love, hurt feelings — toward each other's displays of goodwill or ill will.
When someone steps on your foot deliberately, you feel resentment. When they do it accidentally, you don't. This distinction — between actions that manifest regard for you and actions that don't — is the foundation of moral life. And it is not a theoretical position we adopt after careful reflection. It is a deeply embedded feature of human psychology that we could no more abandon than we could abandon language.
There are cases where we suspend the reactive attitudes — toward a small child, toward a person with severe schizophrenia. We shift to what Strawson calls the "objective stance," treating the person as someone to be managed, not as a full participant in the moral community. But the idea that we should adopt the objective stance toward everyone strikes Strawson as not just wrong but practically unthinkable.
There is a concept in philosophy that acts as a kind of solvent on our confidence in blame: moral luck.
The idea, developed by Bernard Williams and Thomas Nagel in a pair of papers in the mid-1970s, is deceptively simple. Two drivers drink too much at a party. Both get behind the wheel. Both swerve across the centre line. For one, the road is empty. For the other, a child is crossing. The first driver gets a traffic ticket. The second kills a child and faces manslaughter charges, social ostracism, and a lifetime of guilt.
The two drivers made identical choices. The only difference is luck — whether a child happened to be on the road. Yet we blame the second driver far more severely.
Resultant luck: both drivers were equally negligent, but only one killed a child. We judge the killer more harshly — for an outcome beyond his control.
Circumstantial luck: the German who becomes a Nazi collaborator and the one who emigrates in 1929 may have identical characters. Only one faces the moral test.
Constitutive luck: some people are born with calm temperaments and strong impulse control. Others are not. These differences are unchosen — yet they profoundly shape moral behaviour.
Causal luck: how our actions are determined by antecedent causes — genetics, upbringing, culture. This is, essentially, the free will problem itself.
The challenge of moral luck is not that it makes blame impossible. It makes blame uncomfortable with itself. If we believe that people should only be judged for things within their control, then the pervasiveness of luck in moral outcomes represents a deep tension in our practices of praise and blame. Robert Hartman, in his 2019 work In Defense of Moral Luck, argues the tension is ultimately unstable: either we accept an "absolutely fair" morality in which no one is praised or blamed for anything influenced by luck — which leads to moral responsibility scepticism, since everything is influenced by luck — or we accept that morality is genuinely unfair.
Nowhere is the tension between biology and blame more concrete than in criminal law. The U.S. Supreme Court decisions beginning with Roper v. Simmons represent perhaps the most consequential example of neuroscience directly reshaping the architecture of legal blame.
Roper v. Simmons (2005). The Supreme Court bans the juvenile death penalty, citing neuroscience: adolescent brains have an immature prefrontal cortex, less impulse control, and greater susceptibility to peer influence.
Graham v. Florida (2010). Bans life without parole for non-homicide juvenile offenders. Justice Kennedy reaffirms that juveniles have less culpability due to immature development.
Miller v. Alabama (2012). Bans mandatory life-without-parole for all juvenile offenders. The Court cites developmental brain research directly: adolescent moral character is not fully formed.
Montgomery v. Louisiana (2016). Makes Miller retroactive, reinforcing the neuroscience of adolescence: youth are a fundamentally different category of offender.
The scientific evidence was drawn from neuroimaging studies showing that brain processes underlying impulse control, reward motivation, and emotional regulation are immature during adolescence. The prefrontal cortex undergoes significant development from adolescence into the early twenties. The social-emotional system — which drives sensation-seeking and emotional reactivity — matures earlier. The result is a period during which adolescents have the emotional accelerator fully engaged but the cognitive brakes still under construction.
(Chart: approximate adolescent maturation as a percentage of adult capacity, based on the developmental neuroscience literature: Casey et al., 2008; Steinberg, 2008; Cohen & Casey, 2014. Bars represent relative maturation, not absolute brain size.)
If immature brains justify lesser punishment for seventeen-year-olds, what about twenty-year-olds? What about twenty-five-year-olds with developmental trauma that delayed prefrontal maturation? What about forty-year-olds with brain lesions affecting impulse control?
The logic of neuroscience-based mitigation does not come with a natural stopping point. The most dangerous offenders — those with the greatest neurological impairment of self-control — would, on this logic, be the least culpable. This is a conclusion most legal systems find intolerable.
Terry Maroney, a legal scholar at Vanderbilt, has documented an important counterpoint. When individual juvenile defendants — as opposed to juveniles as a group — have attempted to use brain science in their specific cases, they mostly fail. Courts are reluctant to accept that a particular individual's brain immaturity caused their crime, partly because neuroimaging cannot yet make reliable claims about individuals, and partly because accepting such arguments threatens to dissolve the mens rea requirement foundational to criminal law.
The emerging consensus in neurolaw: biology should inform sentencing — especially regarding rehabilitation and recidivism risk — without undermining the foundation of criminal responsibility. Neuroscience is a lens, not a verdict.
So where does this leave us?
Not with an answer, but perhaps with a better question — or rather, three better questions.
First: does the neuroscience settle the question? The science itself is not in dispute. Behaviour is the product of neural processes, which are in turn the product of developmental, genetic, hormonal, and environmental forces. The question is what follows. Sapolsky and the neurogeneticist Kevin Mitchell, whose Free Agents defends the opposite conclusion, look at the same evidence and disagree about agency. This tells us something important: the question of free will is not one that neuroscience alone can settle. It requires philosophical argument about what kind of agency is sufficient for responsibility — and that argument is not reducible to brain scans.
Second: where, in practice, is blame appropriate? This is the question that actually drives legal systems, institutional design, and everyday moral life. We already recognise that certain conditions — severe mental illness, extreme youth, coercion, cognitive impairment — diminish or eliminate blame. We already have a spectrum, not a binary. What neuroscience does is not demolish this spectrum but refine it. It tells us, with increasing precision, which brain systems are involved in self-regulation, how those systems develop, and what conditions impair them. This is enormously valuable — not as a philosophical argument against free will, but as an empirical input into the practical question of where blame is appropriate and where it is not.
Third: can blame ever be fair? This is where moral luck bites hardest. If constitutive luck — the luck of being born with a well-functioning prefrontal cortex, supportive parents, and a stable community — plays a significant role in moral behaviour, then blame is always, to some degree, unfair. The person who resists temptation may be exhibiting not superior moral character but superior neurological equipment.
And yet. P. F. Strawson may be right that the reactive attitudes — resentment, indignation, gratitude, guilt — are not optional features of human psychology that we can choose to discard after reading the right neuroscience paper. They are constitutive of what it means to treat another person as a full moral agent. To adopt the "objective stance" toward everyone — to treat all human behaviour as the output of biological machinery, to be managed rather than judged — is not just practically difficult. It may be a form of disrespect. It denies people membership in the moral community. It says: you are a thing that happens, not a person who acts.
Blame, in other words, is likely to survive: not because it reflects some ultimate metaphysical truth about desert, but because it is woven into the fabric of human relationships, institutions, and self-understanding in ways that cannot be surgically removed without damaging the patient.
What we can do — what the science genuinely helps us do — is make blame smarter. We can recognise that the capacity for self-control is not equally distributed, that it is shaped by forces beyond individual control, and that our institutions should reflect this. We can build criminal justice systems that prioritise rehabilitation over retribution. We can design educational systems that strengthen executive function rather than simply punishing its absence. We can approach human failure with the understanding that the line between "won't" and "can't" is far blurrier than our moral intuitions suggest.
Sapolsky ends Determined with the analogy of epilepsy — a condition that was once blamed and is now understood. He hopes that all behaviour will eventually be understood the same way.
Perhaps. But there is a crucial difference between epilepsy and moral agency. No one is grateful that someone else has epilepsy. No one resents a seizure. The reactive attitudes don't apply. For human actions — choices made, harms inflicted, kindnesses offered — they do. And that distinction, whether it is ultimately illusory or not, is the ground on which moral life is built.
The question is not whether to stand on that ground. We have no choice. The question is whether we can stand on it with our eyes open — knowing that the ground is shakier than we once believed, and building accordingly.
Sapolsky, R. M. (2023). Determined: A Science of Life Without Free Will. Penguin Press.
Mitchell, K. J. (2023). Free Agents: How Evolution Gave Us Free Will. Princeton University Press.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity. Brain, 106(3), 623–642.
Schurger, A., Sitt, J. D., & Dehaene, S. (2012). An accumulator model for spontaneous neural activity prior to self-initiated movement. PNAS, 109(42).
Braun, N., et al. (2021). A meta-analysis of Libet-style experiments. Neuroscience & Biobehavioral Reviews.
Bredikhin, D., et al. (2023). (Non)-experiencing the intention to move. Neuroscience Research.
LeDoux, J. E. (1996). The Emotional Brain. Simon & Schuster.
Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision making and the orbitofrontal cortex. Cerebral Cortex, 10, 295–307.
Gangopadhyay, P., et al. (2021). Prefrontal-amygdala circuits in social decision-making. Nature Neuroscience, 24, 5–18.
Zheng, J., et al. (2024). Neuronal activity in the human amygdala and hippocampus enhances emotional memory encoding. Nature Human Behaviour.
Richter-Levin, G. & Akirav, I. (2003). Emotional tagging of memory formation. Progress in Neurobiology.
Aharoni, E., et al. (2013). Neuroprediction of future rearrest. PNAS, 110(15), 6223–6228.
Frankfurt, H. G. (1969). Alternate possibilities and moral responsibility. Journal of Philosophy, 66(23), 829–839.
Strawson, P. F. (1962). Freedom and Resentment. Proceedings of the British Academy, 48, 1–25.
Nagel, T. (1979). Moral Luck. In Mortal Questions. Cambridge University Press.
Williams, B. (1981). Moral Luck. In Moral Luck: Philosophical Papers 1973–1980. Cambridge University Press.
Dennett, D. C. (1984). Elbow Room: The Varieties of Free Will Worth Wanting. MIT Press.
Fischer, J. M. & Ravizza, M. (1998). Responsibility and Control. Cambridge University Press.
Hartman, R. J. (2019). In Defense of Moral Luck. Cambridge University Press.
Pereboom, D. (2014). Free Will, Agency, and Meaning in Life. Oxford University Press.
Bourget, D. & Chalmers, D. J. (2023). The 2020 PhilPapers Survey. Philosophers' Imprint.
Roper v. Simmons, 543 U.S. 551 (2005).
Graham v. Florida, 560 U.S. 48 (2010).
Miller v. Alabama, 567 U.S. 460 (2012).
Montgomery v. Louisiana, 577 U.S. 190 (2016).
Buckholtz, J. W., et al. (2015). From blame to punishment: Disrupting prefrontal cortex activity reveals norm enforcement mechanisms. Neuron.
Maroney, T. A. (2009). The False Promise of Adolescent Brain Science in Juvenile Justice. Notre Dame Law Review, 85(1).
Focquaert, F., et al. (2018). Neurobiology and crime: A neuro-ethical perspective. Journal of Criminal Justice.
Scott, E. & Steinberg, L. (2008). Rethinking Juvenile Justice. Harvard University Press.
Casey, B. J., et al. (2008). The Adolescent Brain. Developmental Review, 28(1), 62–77.