Inspire - Lent 2026

Editorial

This issue of Inspire, a completely pupil-run publication, showcases the wide variety of interests and talents of our pupils. Each article has been written based on its author’s own choice of topic, allowing knowledge to expand beyond the school curriculum and letting readers educate themselves. We have been fortunate to have the opportunity to work on it and hope you enjoy this edition as much as we do.

Thank you

We would like to thank Mr Moule for providing us with this opportunity and continuing to support us whilst providing guidance whenever needed. Furthermore, we would like to thank Mrs Jordan for all her help with formatting and organisation in order to make this edition possible. Finally, thank you to all the pupils who have provided their valuable time to spread their knowledge and inspire others.

French influence on Russian language and literature

The influence of French on Russian language and literature is a striking example of how cultural prestige can reshape national identity. Beginning in the 18th century, particularly under the westernising reforms of Peter the Great and later Catherine the Great, the Russian aristocracy embraced French as the language of diplomacy, education, and high society. By the 19th century, many nobles spoke French more fluently than Russian, a reality vividly portrayed in works such as War and Peace by Leo Tolstoy, where entire passages appear in French to signal class and cultural allegiance. This linguistic dominance not only enriched Russian vocabulary with Gallic loanwords but also shaped literary style, social satire, and themes of identity, as writers grappled with the tension between European sophistication and emerging Russian nationalism.

French cultural influence was most pronounced within the aristocratic elite, where French was adopted as the main language of the court and became a sign of sophistication. Its adoption among the aristocracy began during the reigns of Peter the Great and Catherine the Great as they attempted to join Europe, a process referred to as the ‘Westernisation’ or ‘Europeanisation’ of Russia. To join the European world, a knowledge of foreign languages was considered crucial, as without such familiarity there could have been ‘no engagement with modernity, with the Europe of the Ages of Reason, the Enlightenment … the industrial revolution, or the numerous scientific discoveries of the eighteenth and nineteenth centuries’ (Argent, Offord and Rjéoutski, 2015). French became the language that connected the high society of Russia to the rest of Europe, particularly as it became synonymous with civility and elite culture, functioning as the language ‘of sociability in the salon, at the soirée, the ball, the theatre, and the opera,’ as well as ‘the language of fashion, coiffure, cuisine, and new pastimes such as card-playing and gambling’ (Argent, Offord and Rjéoutski, 2015). The dominance of French then extended into private and daily life, which had previously been the domain of Russian, as members of the nobility began to write personal correspondence and diaries in French rather than Russian. In the Golitsyn archives, Aleksandr Mikhailovich Golitsyn wrote letters in French to members of his family during the 1750s (Berelowitch and Offord, 2015). Private personal records also reflected this linguistic hierarchy. As Princess Katya Golitsyn observed of aristocratic diaries, ‘all the daily things that they must do are all written in Russian… [while] any slightly more societal mentions were written in French, so the servants couldn’t read it’ (K. Golitsyn, personal communication, 30/12/2025). This suggests that French gained more status and was now perceived as the appropriate language for social matters, not just at court but at home as well, relegating Russian further to the mundane activities of life. These practices exacerbated the social divide, as language raised the barrier between societal classes.

Despite multilingualism being highly valued in elite circles, French increasingly dominated elite social life. Even Russians who had a command of several languages would speak French to one another at the salon, because ‘French was the accepted language in social gatherings of this type’ (Argent, Offord and Rjéoutski, 2015). This reinforced French as a marker of refinement rather than merely a form of communication, illustrating that it functioned as the most socially and culturally useful language for the upper echelons of society, surpassing other modern languages. The same view is demonstrated by the influx of French émigrés who came to teach the aristocracy. The demand for French tutors was so high after the enthronement of Elizabeth that by 1747 the ‘desired qualifications were so slight’ that foreign adventurers found it possible to gravitate towards the profession (Ignatieff, 1966), demonstrating that the prestige of French took precedence over concerns about educational quality. Only later, in 1757, was an edict decreed to ensure that tutors had the correct qualifications (Pokrovskiĭ, 1910). In many aristocratic households French challenged Russian as the primary language of childhood education; as Figes notes, French became ‘the language of all personal relationships as well’, with many noble families speaking French at home and ‘hardly [speaking] Russian at all’ (Figes, 2003). This demonstrates that the introduction of French into society and into the education of the elite contributed, generation by generation, to the gradual decline of Russian in daily life.

The dominance of French in the Russian aristocracy had consequences not only for court culture but also for the development of the Russian language itself. Having positioned itself as an integral part of court life, French began to extend its influence into the vocabulary and structure of Russian. One of the most visible manifestations was the incorporation of French vocabulary and expressions into Russian through the adoption of loanwords – words taken directly from French, often in areas of elite life and culture. In addition to direct borrowings, French also influenced Russian through calques, in which phrases and structures were translated literally into Russian rather than simply borrowed. This influence reinforced the cultural divide between the elite and the wider population through a growing language barrier. The most prominent impact of French loanwords was in areas associated with aristocratic life, especially fashion and social etiquette, reflecting the aristocratic perception of French culture as the standard to attain. Terms connected with aristocratic social life such as café (кафе), restaurant (ресторан), théâtre (театр) and champagne (шампанское) were taken directly from French (Smith, 1996), emphasising the dominance of French customs within aristocratic life. Similarly, vocabulary associated with fashion and appearance was frequently borrowed, with words such as mode (мода) and bijouterie (бижутерия) describing aspects of dress associated with French style. Unlike loanwords, which were visibly foreign, calques demonstrate a deeper influence of French, because French modes of expression were translated directly into Russian.
Calques from French were used especially ‘to enrich the Russian language in a wide variety of ways with expressions connected with human experience, on the moral, emotional and social planes’; this lexical addition stemmed from gaps in the existing vocabulary and provided ‘new meanings for referring to human psychology, feelings, emotions, thoughts and social ways of life’ (Smith, 1996). These borrowings reflected a deep-rooted desire of the elite to align themselves with Western European linguistic norms, using language as a marker of their belonging in the West. However, the presence of French in the Russian language increased the isolation of the elite classes from the rest of the nation. French-derived vocabulary widened the divide, as many people neither spoke French nor identified with its cultural values. This led to criticism from writers and thinkers, suggesting that French influence, while culturally and linguistically enriching, also generated concern and anxiety about national identity.

Furthermore, French influence contributed significantly to the development of the Russian literary language. Pre-18th-century literary Russian was shaped by Lomonosov’s rigid three-style system, which produced an elevated but often clumsy prose. The ‘high’ style, used for genres such as heroic poems, relied heavily on Church Slavonic; the ‘middle’ style blended the Russian vernacular with Church language; and the ‘low’ style was reserved for comedies, songs or letters, reflecting its lack of sophistication (Smirnova, 2020). This structure laid foundations that later authors could build upon. French influence did not replace Lomonosov’s system; rather, it developed Russian into a language of elegance, sociability and narrative flow by blending conversational refinement, stylistic taste and grammatical accuracy. Russian authors therefore engaged actively with French influence, making it a tool to explore identity, class and authenticity.

Pushkin is often credited with bringing Russian literature to its modern standard. As D. S. Mirsky famously observed, ‘before Pushkin there were writers, after Pushkin there was a literature’ – a claim that reflects Pushkin’s successful development of Russian through French (Mirsky, 2021). Pushkin seamlessly blended French stylistic elegance with spoken Russian. Although he wrote primarily in Russian, many of his early works were composed in French, in which he was fluent from a young age, and this exposure shaped his literary style immensely. The influence is particularly prominent in Eugene Onegin, where Pushkin frequently uses French phrases to reflect the speech of the educated nobility. Through the natural integration of French loanwords relating to noble life – fashion, manners and philosophy – French appears in the text organically, imitating the authentic speech of the aristocracy. Moreover, the novel in verse is often described as ‘European’, unrestricted by Slavic rhetorical excess. This shift from Lomonosov’s grammatical precision to a more fluid technique supports Vinogradov’s observation that ‘the norms of literary Russian were established through Pushkin’s creative practice rather than grammatical prescription’ (Vinogradov, 1969). Pushkin also borrowed from French not only linguistically but structurally, adopting popular French genres such as the witty epigram and salon verse. His mocking epigram on Count Vorontsov, for example, has the irony and lightness of tone characteristic of the French Enlightenment. This synthesis helped make Russian literature both accessible and refined, narrowing the gap between elite and vernacular language.

In addition to Pushkin, Tolstoy was also influenced by French, as can be seen in his use of French in direct dialogue in War and Peace, reflecting the use of French at court by the Russian nobility. Tolstoy uses French to represent the frivolity of court life, contrasting it with Russian, which signifies sincerity, emotional depth and national belonging, especially at moments of patriotic significance. This can be seen when Natasha visits her ‘uncle’ and is drawn in by the folk song and dance: the scene is written entirely in Russian as she unconsciously connects to her national identity, highlighting the strength of national identity and culture. It contrasts strikingly with the opening of the novel, which begins in French – ‘eh bien, mon prince, Gênes et Lucques ne sont plus que des apanages, des estates, de la famille Buonaparte’ (Tolstoy, 1867) – with the intention of drawing the reader into the superficial and Francophile setting of St Petersburg, where Anna Pavlovna uses French as a social tool to imply status and politeness rather than to form a real connection with the people around her.

In conclusion, the French influence on Russian language and literature reveals the complex interplay between cultural admiration and national self-definition. While French once symbolised sophistication and elite status in Russia, its widespread use ultimately prompted writers and thinkers to reflect on what it meant to be authentically Russian. Through this tension, Russian literature developed a richer vocabulary, sharper social critique, and a deeper exploration of identity, transforming foreign influence into a catalyst for one of the world’s most distinctive literary traditions.

‘Faith, Industry and Thrift’: Clara Josephine Wieck’s indispensable contribution to R. Schumann’s posthumous success

The surname ‘Schumann’ most often refers, not unjustly, to Robert Schumann, a German composer of the Romantic era. Yet behind his immensely innovative and paradigm-setting compositions stands a woman whose talents should not be ignored – Clara Josephine Wieck (Schumann).

In many ways she was not the revolutionary female figure of her field that some others were; certainly, she was in part a victim of the patriarchal society of the 19th century. Her childhood was permeated by the insidious control of her stifling, domineering father, Friedrich Wieck, and her adulthood was overshadowed by Robert Schumann (though the two were seemingly smitten with each other), even past his death and up to her own. Nevertheless, her name as a piano virtuoso still stands among those of the most prolific musicians (Liszt, Mendelssohn, Chopin), and she is neither Friedrich’s ‘Wieck’ nor merely ‘Mrs Schumann’. She left a serious legacy: concert pianists began to move away from merely ‘virtuosic’ display and to build programmes of serious music, including the works of Beethoven, Brahms and Schumann. She also began the tradition of pianists playing from memory, and a cake – the Torte à la Wieck – was even created in her honour.

‘Behind every successful man there is a strong woman.’ Truly, both F. Wieck and R. Schumann owe much to Clara. At the age of nine she made her debut at the Leipzig Gewandhaus in 1828. With feisty, passionate playing she astounded Germany and indeed Europe – J. W. von Goethe wrote, ‘The girl has more strength than six boys combined’. At the 1835 premiere of her Piano Concerto in A minor, performed with Mendelssohn conducting, she was the piece’s 16-year-old composer-pianist. Her successful tours to the great European centres proved her father’s rigorous, quasi-scientific method to be of use, attracting students including Robert Schumann. Clara greatly amplified her father’s professional career, so that even after her estrangement from Wieck, historical memory continued to frame him as the pedagogue who created the child prodigy Clara Wieck-Schumann.

Friedrich’s suffocating control diminished (but did not obliterate) her love for him, and its indelible effect on Clara in some ways fuelled Robert’s success. She was the famous concert pianist; he, the little-known composer.

In spite of this, she pitifully subordinated her musical career, throughout her life, to Robert’s path to fame. Their very industrious marriage was rich in exchanges of musical ideas, but the power dynamic nevertheless drew Clara down a rabbit-hole of apologies and restraint – for practising whilst Robert was working, and for not focusing on her ‘marital obligations’. At the outset they both agreed (more Robert than Clara) that she would give at most one concert and two appearances at court per year, and no more than two lessons a day.

Clara Wieck, age 9

Undeniably they loved one another very much, but Robert, perhaps unconsciously, degraded Clara to a certain extent, especially when it came to the ‘masculine domain’ of composing. Clara’s Piano Trio in G minor (1846) was remarkable for its unusual structural maturity. Its beautiful architecture is disciplined but intense, and the emotional depth it evokes is not overtly theatrical compared with that of Robert, who favoured greater contrast in dynamics and style. She understood form, Romantic harmony and lyrical melody, all exceptionally realised in the profoundly emotional last movement. The poised nature of Clara’s compositions is in no way an imitation of Robert’s; she treats the piano as a partner to the strings rather than as the superior soloist.

Robert, who had never written a piano trio, rushed to publish three a year later, in 1847. Certain aspects of Clara’s composition find reflection in Robert’s D minor trio, Op. 63 (the dominant minor of G). The strings in Op. 63 are much more assertive than they had been in his previous compositions. The D minor trio breaks away from Clara’s ideals of coherence but is without doubt responding to, if not influenced by, her style. Yet Clara still disparaged her own work: ‘Of course it’s a lady’s work, always lacking in power, and here and there uninspired.’

Together the couple toured northern Germany (1842), Russia (1844), Vienna (1846) and the Netherlands (1853). As a mother of eight, Clara tried desperately not to curtail her time for practising (though Robert’s inferiority complex discouraged her from it), and the intensity of her concert schedule – 139 performances in their 14 years together – proves that she had the capability to balance both aspects of her life. Whether in her revival of Bach’s preludes (rarely attempted in public during her time), her most vivacious interpretations of Beethoven’s Appassionata Sonata, or her husband’s radical and psychologically intense Symphonic Études and Fantasie in C, her canonical fingerwork and prodigious memory (she could retain whole pieces) promoted her own and especially Robert’s prestige in the musical world of Romanticism.

Cadenzas for the first and last movements of Mozart’s D minor piano concerto (Köchel number 466) by Clara Schumann (c.1853)
Clara Wieck, age 20, just before she and Robert Schumann were married

On the 29th of July 1856, Robert died, having experienced severe mental deterioration for many years. His death left Clara grieving, but also relieved that Robert’s suffering had ended. Brahms, their student (though he and Clara had a much more intimate relationship), took care of his teachers’ seven children whilst she undertook extensive tours – nineteen times to Britain, to St James’s Hall and the Crystal Palace, and to Russia, France, Austria (and, of course, Germany). Her own interpretations of the High Romantic era were embodied in her annotations – Mozart’s Piano Concerto in D minor, performed in 1857 and again in April 1885 – and in her advocacy of Robert’s music. In the same month she played the (only) Schumann Piano Concerto, in A minor, conducted by her half-brother, after which she wrote to Brahms, ‘I think I played fresher than ever’ – a woman awakening from the sorrows of the deaths in her family.

Alongside her successful career as a concert pianist, she accepted the prestigious position of first female piano teacher at Dr Hoch’s Konservatorium in 1878. Her tireless curation of Robert’s legacy and her rigorous approach attracted local and international students. Robert’s belated fame, which refined his image as a forgotten genius, was a sanitised one – the reputation Clara had put together was unscathed by his psychological struggles and his sometimes irascible, contradictory character. However, because of her persistence in guarding his legacy (as well as the economic pressure of being sole provider for her seven children, and the perpetual self-doubt that overshadowed her life), she refrained from composing in her later years.

Looking back on Clara’s tragic life, we must question what patriarchal society made of the women of the past. Clara was doubtlessly a good daughter, wife and mother, but had she considered herself before others she would perhaps have been a far more illustrious virtuoso of her time. Her editions of R. Schumann received remarkable posthumous fame, yet the glory of her enduring influence was largely obscured by Robert’s, and by a historical narrative that rendered women’s intellectual and artistic industry inferior.

‘I cannot refrain from my art. If I did, I’d never forgive myself.’ – Clara Schumann, autumn 1840

Pastel of Clara Schumann by Franz von Lenbach, 1878

Is addiction genetic or environmental?

Addiction is defined as the fact or condition of being physically and mentally dependent on a particular substance, and has been classified as a chronic, relapsing brain disease by many large scientific organisations since the late 20th century. There are arguments for both a genetic and an environmental component to addiction, suggesting that both our genes and our upbringing and surroundings play a large role in the possibility of becoming addicted to a certain substance.

Addiction operates through the brain’s reward system. This system was advantageous to early humans because it controlled the release of dopamine, a neurotransmitter that creates a feeling of pleasure and, through that pleasure, increases the motivation to repeat certain behaviours. For example, when early humans ate, the reward system triggered the release of dopamine, causing them to feel pleasure and therefore making them more likely to eat again sooner. Addictive substances trigger an outsized response, flooding the reward pathway with around 10 times more dopamine than a natural stimulus. The brain remembers this response and associates it with the addictive substance, but over time chronic use causes the brain to become less sensitive to dopamine. It becomes increasingly important, but also increasingly difficult, to achieve the same initial level of pleasure, leading to overuse of the substance and to a strengthening addiction.

The genetic component of addiction has been studied extensively; the theory centres on particular alleles that make us more or less receptive to certain substances, or that increase the severity of withdrawal symptoms. Whilst some studies compare human subjects directly, most are performed on animals in controlled laboratory settings, often on mice, whose reward systems are similar to ours. Several genetic differences contributing to addiction have been found. A missing mGluR2 receptor adds both risk and protection: rats without it get fewer rewarding effects from cocaine, so they are less likely to work to receive it, but if it is readily available they will take disproportionately large amounts to achieve the same result. Two copies of a risk variant of the CHRNA5 gene, which codes for a protein that helps cells sense nicotine, make people twice as likely to be nicotine dependent. Mice that make more protein from the Mpdz gene experience less severe withdrawal symptoms from alcohol and from sedative-hypnotic drugs like barbiturates. A protective variant of the alcohol dehydrogenase 2 gene codes for a protein that does not break down alcohol the way most people’s does, leading to nausea, flushing, rapid heartbeat and headache when drinking, making inebriation more unpleasant and decreasing the risk of alcohol dependence. Further variants affect alcohol consumption under stress, opioid addiction, and even sensitivity to cocaine, via a gene that makes a protein involved in learning and memory.

All of these studies point towards a significant genetic factor contributing to the risk of addiction, caused by increased sensitivity or different reactions to a substance. Many of these genetic conditions are also hereditary, meaning that addiction may be passed down through families as a common trait. However, genetic predisposition can only account for part of the chance that a person becomes an addict – suggested to be around 40-60% – leaving a large portion down to environmental factors.

Environmental factors that contribute to the likelihood of addiction include the people who surround us at different stages of our lives, our socio-economic status, the places we live, and our life experiences. The people around us influence us in multiple ways – family is probably the most significant factor. High stress as a child or young adult, caused by unhappiness in home life, often leads people to turn to substance abuse to escape problems they feel they cannot resolve at home. Events like divorce often leave children feeling anxious and lonely, and they may turn to certain substances as a way of coping with this stress. Influences from the media also fall into this category: higher levels of alcohol advertising across all forms of media are strongly linked with increased consumption, especially among young people.

Socio-economic status (SES) is probably the most powerful environmental factor affecting addiction – studies by the US National Institutes of Health (NIH) have found that different socio-economic categories face different problems concerning addiction. People with low SES typically have greater availability of drugs and other substances where they live, may have drug dealers in their immediate social circles who are perceived as role models, and are under financial pressure, increasing stress and thereby the likelihood of turning to substance abuse. People with medium and high SES typically face increased social rather than economic demands, with social lives, jobs and school all causing significant stress that is linked to addiction. Greater financial means allow not only the support of an addiction but the support of an expensive one – while lower-SES groups were found to have higher rates of nicotine addiction, middle- and high-SES groups were found to have higher rates of marijuana, alcohol and cocaine abuse. The greater likelihood of a college education that comes with higher SES also brings independence from parents and family, which can encourage substance abuse. However, individuals of higher SES also have better access to rehabilitation programmes and treatment thanks to their greater financial means, allowing for easier intervention and, in some cases, the prevention of permanent or long-lasting addiction.

Location affects the likelihood of addiction through the availability of certain substances, as well as through stresses particular to a place. The NIH found urban locations to have far greater rates of substance abuse, owing to the greater availability of drugs in urban environments, but also higher rates of rehabilitation, as treatment centres are predominantly in cities. In contrast, rural areas were found to have lower overall rates of drug use, but still high rates of methamphetamine, tobacco and non-heroin opioid use, partly as a result of the higher average SES of some specific rural areas.

There are clearly various underlying causes of addiction within each of the two categories. Evidently, further research, especially into the genetic component of addiction, is needed before any firm conclusions can be reached. With the current evidence, however, it seems fair to conclude that genetics and environment contribute almost equally to a person’s likelihood of addiction: the significance of environment lies mostly in the availability and reputation of addictive substances within a community or location, while the significance of genetics lies in how a substance, or withdrawal from it, affects an individual’s body and mind. It is important to note that the influence of each of these factors may differ completely from one individual to the next, and that a large component of addiction comes from individual differences and lifestyle choices. Whilst both environment and genetics contribute strongly, addiction is a complex and challenging problem that we cannot examine from just these two viewpoints if we want truly to understand it.

The Science of Dreams

The time we spend dreaming totals around six years of our lives. Dreams sit at the intersection of biology and psychology, where the brain’s physical activity generates emotional narratives. This article will explore the mechanics of dreaming, common dream symbols, and the bizarre ways dreams impact our world.

There are four stages to our sleep cycle; the first three consist of non-rapid eye movement sleep. The first stage is the transition from being awake to a light sleep, and lasts only a few minutes. The second stage is a slightly deeper sleep in which the body starts to regulate its breathing and heart rate, and its temperature drops. During the third stage, deep sleep occurs, in which the body recovers and repairs tissue. The fourth stage is known as REM (rapid eye movement) sleep, and normally begins about 90 minutes after falling asleep. It is characterised by intense brain activity, vivid dreaming and atonia. Atonia is a temporary paralysis of the skeletal muscles – a safety feature of sleep which prevents individuals from acting out dreams and causing injury.

Have you ever woken up from a dream and thought, ‘that made no sense’? This isn’t you going crazy; there is a scientific explanation for the chaos. During REM sleep your prefrontal cortex – the region normally responsible for logic and impulse control – largely shuts off. Usually it acts as a reality filter: if you saw a flying pig, it would tell your brain that this is impossible. During sleep this filter is off, so your brain will accept the wildest scenarios as total reality. While the logic centre is off, the amygdala, the emotional part of the brain, can be up to 30% more active than when you are awake, so everything in a dream feels higher-stakes and more emotional. Put together, these two changes can make dreams feel like a vivid emotional roller coaster where anything can happen. Have you ever woken up unable to remember a dream even though you knew you had one? As soon as you wake, your prefrontal cortex turns back on, immediately judges everything you dreamed to be nonsensical, physics-defying junk data, and tries to delete it. So unless you write your dreams down in a log, your brain will wipe them from your memory. On average you forget 50% of a dream within five minutes of waking and 90% within ten.

During dreams, the brain is thought to be unable to invent new faces. Everyone you see in a dream comes from a huge library of real faces – from your close family to someone you may have glimpsed in a shop ten years ago.

Sigmund Freud’s theory was that dreams are the disguised fulfilment of repressed wishes. He believed that dreams protect sleep from disruption: if a disturbing thought arises, the dream can disguise it, allowing the dreamer to continue sleeping rather than wake up in anxiety. Freud distinguished between two levels of dream content: the manifest content, the storyline as it is told, and the latent content, the true hidden meaning of the dream. The latter is harder to unravel, since it is hidden within the dream’s seemingly insignificant details.

Unlike Freud’s theory, Carl Jung believed dreams are natural, honest messages from the unconscious, intended to balance the conscious mind, a concept called compensation. Instead of hiding secret desires, dreams reveal what is missing in your waking life, highlighting ignored emotions or perspectives to help you become more whole.

During dreams, certain scenarios occur often, and each may suggest something taking place in our lives; different mental and physical factors lie behind each common dream. Falling in a dream can sometimes be triggered as you drift off: when your muscles relax, the mind can mistake the loss of muscle tension for falling.

Falling in a dream can also represent a situation in your waking life where you feel out of control or overwhelmed, or a fear of failure. Another common symbol is being chased, which is driven primarily by mental factors and tends to reflect avoiding a problem or a difficult conversation. Teeth falling out in a dream can be linked to anxiety about communication, but also to teeth grinding: the brain incorporates the real sensation of tension, pressure or pain in the jaw into dreams about losing teeth.

Muscle paralysis during REM sleep also shapes what we dream. Often in dreams you feel paralysed, or scream without making a sound; this is the brain’s interpretation of the body’s muscle paralysis. Paralysis also plays a part in lucid dreams, in which you are fully asleep but aware that you are dreaming, often allowing you to control the dream’s narrative. This hybrid state happens when the areas of the brain responsible for self-awareness and critical thinking reactivate, making you realise you are dreaming without waking up.

The dream lag effect is when memories, people and events from waking life appear in your dreams not immediately but typically around five to seven days later. Researchers hypothesise that the lag exists because the brain needs time to consolidate, organise and stabilise new memories into long-term storage, a process more associated with personal events than with mundane daily activities.

Sleepwalking and talking are parasomnias which occur during stage three of sleep, before muscle paralysis. They involve acting out, moving, or talking without awareness, often triggered by stress, sleep deprivation, fever, or alcohol. Individuals normally have no memory of these.

Ultimately, dreams are far more than junk data or random electrical firing: they are a mixture of biological maintenance and personal storytelling. While our bodies remain unresponsive during REM sleep, our minds are cinemas that consolidate memories and screen our anxieties.

Does the Mathematical Structure of Nature Challenge Strict Naturalism?

‘The unreasonable effectiveness of mathematics in the natural sciences is something bordering on the mysterious, and there is no rational explanation for it’. As Eugene Wigner pondered in his 1960 essay, why does mathematics, a product of human thought, describe the physical universe so well? Natural phenomena from shells to snowflakes, from minerals to leaves, from butterfly wings to human DNA are governed by patterns and sequences that are essentially mathematical. Yet Wigner also recognised that nature does not fit into simple categorisation: natural phenomena do not always occur predictably, even though they are constrained by mathematical order. Processes such as radioactive decay follow probabilistic rather than deterministic laws, showing that mathematical order and unpredictability coexist within the natural world. Wigner emphasises that abstract mathematical structures can describe even uncertainty, although it remains unclear how both order and randomness are captured through these structures. Can the effectiveness of mathematics, then, be fully explained within strict metaphysical naturalism, or are broader metaphysical interpretations needed?
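The point about radioactive decay can be made concrete with a short simulation (a sketch; the decay constant, step count and atom count are arbitrary illustrative values, not data for any real isotope). Each atom decays randomly and independently, yet the population as a whole tracks the deterministic law N(t) = N₀e^(−λt):

```python
import math
import random

def simulate_decay(n0=100_000, lam=0.1, steps=30, seed=42):
    """Each atom decays independently with probability 1 - e^(-lam) per step,
    yet the whole population closely follows n0 * exp(-lam * t)."""
    random.seed(seed)
    p_decay = 1 - math.exp(-lam)          # per-step decay probability
    n, counts = n0, [n0]
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if random.random() < p_decay)
        counts.append(n)
    return counts

counts = simulate_decay()
for t in (0, 10, 30):
    predicted = 100_000 * math.exp(-0.1 * t)
    print(f"t={t:2d}  simulated={counts[t]:6d}  predicted={predicted:9.0f}")
```

No individual decay can be predicted, but the ensemble obeys an exact mathematical form — order and randomness in one process.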

The mathematical structure of nature can be seen in the recurring patterns, symmetries and numerical sequences that govern living systems. Universal patterns such as the Fibonacci sequence in shell spirals and flower petals, fractals in trees and coastlines, and symmetry in snowflakes and butterflies are often linked to mathematical principles like the Golden Ratio. This ratio, approximately 1.618, represents a special proportional relationship: it occurs when the sum of two lengths divided by the larger one is equal to the larger length divided by the smaller one. This matters because it represents a deep mathematical balance expressed throughout nature. Interestingly, although beauty is considered subjective, sublimity in nature, art and the human form is often linked to symmetry, proportion and the golden ratio – all objective mathematical principles. This mathematical structure is also evident in physical reality. By linking conservation laws to symmetry, Emmy Noether showed that mathematics dictates essential features of physical law: her theorem states that conservation laws arise from symmetries, with symmetry in time, space and rotation yielding conservation of energy, momentum and angular momentum respectively. This means that mathematics is not just a way to describe the universe; it shapes the basic rules that govern it.
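That defining property of the golden ratio, and the way it emerges from the Fibonacci sequence, can be checked in a few lines (a minimal sketch):

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio, approximately 1.618

# Defining property: the whole is to the larger part as the larger part is
# to the smaller, i.e. (1 + phi) / phi == phi, equivalently phi^2 = phi + 1.
assert abs(PHI * PHI - (PHI + 1)) < 1e-12

def fib_ratios(n):
    """Ratios of successive Fibonacci numbers, which converge to phi."""
    a, b, ratios = 1, 1, []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

print(fib_ratios(20)[-1])  # already agrees with PHI to several decimal places
```

The ratios 2, 1.5, 1.667, 1.6, … oscillate around φ and close in on it, which is why Fibonacci spirals in shells and seed heads display golden-ratio proportions.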

When considering the philosophical implications, metaphysical (or ontological) naturalism holds that reality consists solely of natural or physical elements, rejecting the existence of anything supernatural. From this perspective, the effectiveness of mathematics isn’t strange: it can be understood through evolutionary development and practical usefulness. Mathematics did not begin in its current abstract form; rather, it originated as a tool for survival in the form of a ‘number sense’, the ability to approximate quantities. Monkeys, for example, still demonstrate this ability, and scientists have compared their numerical cognition to that of humans. Early humans relied on this skill to estimate quantities such as food sources. Over time, as survival pressures changed, humans developed more advanced symbolic systems from these basic abilities. Naturalists argue that experience is the foundation of abstract mathematics, which gradually grew out of interaction with the physical world. Mathematics works today, according to this view, because humans evolved within a law-governed universe and refined systems that accurately describe it.

[Image: the Fibonacci sequence]

This process resembles an idea in the novel Divergent by Veronica Roth: even when humans attempt to categorise phenomena or impose rigid systems, unpredictability and divergence remain. Just as the individuals in the book cannot be perfectly classified into a single faction, mathematical discovery often transcends strict rules and prior expectations. Structured systems coexist with creativity, exploration and unexpected insights, reflecting the interplay of order and unpredictability in both human cognition and the natural world.

Naturalism has several strengths. It avoids invoking supernatural entities as an answer to unanswered questions, aligns closely with the scientific worldview, offers a plausible evolutionary explanation for mathematical cognition, and provides an account of why mathematics works as a practical tool for modelling reality. However, there are also serious challenges. Mathematical structures, like numbers and sets, seem abstract and non-physical, leaving naturalism to explain how they fit into a purely physical universe. At the same time, highly theoretical mathematics, often developed with no practical purpose in mind, regularly predicts physical phenomena that weren’t known beforehand. If mathematics were just a human invention, it would be hard to explain why it works so well in describing the world. Finally, the sense that mathematical truths are discovered rather than invented suggests that naturalism alone may struggle to fully explain the universality and necessity of mathematics.

Theism offers another explanation, linking the mathematical structure of the universe to a divine power. Because God is rational, it follows that everything he made obeys strict rules and patterns that make logical sense. What we notice in nature’s rules and fixed numbers comes from that clear and rational thinking; mathematics, therefore, becomes the DNA of the universe. It follows that humans would understand maths, since they are made in God’s image and think in a similar way. The theologian Augustine of Hippo (AD 354–430) believed that mathematics was an eternal and unchanging truth deeply rooted in the mind of God. From this perspective, mathematics is not a human construct but the signature of an underlying, consistent and divine intelligence that, in the words of William Wordsworth, ‘rolls through all things.’ Theism therefore addresses what naturalism cannot easily explain, such as why abstract maths feels so absolute and universal. Predictions made through pure theory sometimes match real events long before they are observed, which naturalism alone struggles to account for. If numbers and logic live in the awareness of a reasoning divine presence, their reach across time and space stops feeling odd: we grasp them because we tap into that same orderly structure.

The mathematical structure of nature therefore raises an important philosophical challenge for strict metaphysical naturalism. While naturalism provides a coherent and scientifically grounded explanation of how mathematical cognition may have evolved and why it functions as a successful tool for modelling reality, questions remain about whether this account is complete. Mathematical entities such as numbers, sets, and symmetries appear abstract and non-physical, yet they describe the physical universe with striking precision. The predictive success of highly theoretical mathematics, often developed without practical application in mind, suggests that mathematics is not merely a convenient human invention but somehow deeply connected to the structure of reality itself. From a strictly naturalistic standpoint, this connection is explained as a product of evolutionary adaptation within a law-governed universe; however, this does not fully explain why mathematical truths seem necessary, universal, and discoverable rather than invented.

Theism, by contrast, offers a broader metaphysical framework in which the rational structure of the universe reflects a rational source. Thinkers such as Augustine of Hippo argued that eternal mathematical truths exist within the divine intellect, providing an explanation for their constancy and objectivity. Within this perspective, the effectiveness of mathematics is not mysterious but expected: a rational universe created by a rational God would naturally exhibit order, symmetry, and intelligibility. Yet this view does not decisively prove the existence of God; rather, it presents an alternative account that attempts to address aspects of mathematical reality that naturalism finds difficult to explain.

Ultimately, the coexistence of mathematical order and unpredictability in nature, from probabilistic processes such as radioactive decay to the elegant symmetries described by Noether’s theorem, suggests that reality resists being reduced to a single explanatory framework. Just as divergence disrupts rigid classification in the novel Divergent, the mathematical universe resists confinement within purely material explanations. Strict naturalism offers powerful insights but may not yet provide a fully satisfactory account of why mathematics is so universally and, in Wigner’s words, ‘unreasonably effective’. Acknowledging this does not settle the debate, but it keeps philosophical inquiry open, inviting deeper reflection on whether the structure of nature points beyond itself or simply reveals the limits of our current understanding.

Is climate change an unsolvable collective action problem?

Although humans have been aware of climate change for over a century, it is only recently that we have made any serious effort to mitigate its effects. Joseph Fourier first proposed the idea of the greenhouse effect in the 1820s, but over a century passed before people began, in the 1930s, to notice the actual impact of this discovery in our warming climate. Following such a potentially threatening discovery you would expect scientists to pour research into overcoming the danger, but because climate change has never been the most imminent problem, this has not been the reality.

It wasn’t until 1979 that the first World Climate Conference took place, by which point the global average temperature had already risen by 0.4 °C (almost a third of the total warming we experience today) and had been rising steadily since around 1930. With a delay of several decades between the release of greenhouse gases and an observed increase in temperature, we have already set ourselves up for further warming regardless of our future responses. Since the 1979 conference there have been dozens of international conventions in an increasingly urgent attempt to slow this warming before it is too late. Yet despite our best efforts, very little progress seems to have been made.

This leads me to suggest that relying on international governance and policies is not the most effective way forward; perhaps our focus needs to be less on united international efforts and more on currently underfunded and underestimated forms of new technology and ideas, which could make a large difference if given the chance to scale up. At present our main focus is on top-down strategies to mitigate climate change, whereas real change will come only when we adopt bottom-up management as well. The difficulty is shifting government attention from relying on policies to drive change towards supporting and funding individual companies and initiatives, helping them get off the ground so that they become self-supporting and economically viable.

Since 1979 many international conferences have taken place with the aim of reducing or halting climate change, and several policies and agreements have been developed. The Climate Change Act 2008, implemented in the UK, was the first legally binding national climate change policy, meaning the UK was, in theory, held to its promises to protect the environment under the law. The act requires the UK to reach net zero greenhouse gas emissions by 2050 and aims for 95% of electricity to be low carbon by 2030, along with a ban on the manufacture of new petrol and diesel cars by 2030. Furthermore, the government stated that it would make new investments in carbon capture technology and introduce five-year carbon budgets to combat the greenhouse effect. But although these aims are worthy end goals, there is very little support and funding in place to help reach them, and without small, achievable stepping stones outlining a strategy it is unclear how, as a country, we should reach them. Companies that needed to respond to the act are left unsure of how to make progress, with insufficient funding, trying to achieve what feels like an impossible aim.

With only four years to go until the first of these deadlines, we are on track to meet very few of the targets. Progress towards net zero greenhouse gas emissions must occur twice as fast as it has since 2008 if we are to reach the 2050 goal. Despite the act being legally binding, there is very little we can do to enforce it – and herein lies the main problem with combatting climate change through governmental policies alone. When targets constantly shift it becomes impossible to know what you are aiming for: the target to ban new petrol and diesel cars by 2030, for example, was originally set for 2040 when announced in 2017, and has undergone five policy changes in the seven years since. These unstable goals create a lack of clarity and permanence, fostering the belief that if you fail to reach a target you can simply push it back even further. But climate change won’t keep waiting for us, and we can’t keep pushing it back.

I believe the way to tackle climate change is not on an international scale, where management and progress are dictated by policies and by people who have little knowledge of the industries that could have a true influence, but by shifting our focus to developing the companies and industries with the potential to pave the way to change. Much of the technology and knowledge we need to overcome climate change already exists but cannot be scaled up to global impact without more focus and investment. My alternative suggestion is to redirect government policy towards the growth of individual environmental companies and provide the investment they need to grow; the changes we need to overcome climate change will follow.

Afforestation and regeneration of forests is an essential factor in overcoming climate change. One company focused on addressing this is re.green. It aims to rejuvenate and reforest the Atlantic Forest, which runs through Brazil, Argentina and Paraguay and has been reduced to less than 12% of its original size over the past century. Brazil also contains around 60% of the Amazon rainforest, which acts as a massive carbon sink and supports a wide range of biodiversity, including unique species such as jaguars, toucans and sloths. By maintaining and replacing forests in Brazil, re.green is helping to ensure a safer future for these rare species.

Re.green aims to make forest regeneration more economically viable and accessible through the use of AI, drones and satellite imaging, in conjunction with ecological and financial data, to quickly and effectively identify the areas of land with the most potential for restoration. These technological developments allow afforestation to occur on a scale at which it has never been undertaken before, and the globalisation of these methods has the potential to support a greener future for our planet. Since it was founded in 2021, re.green has maintained over 12,000 hectares of forest through active restoration and planted more than 6 million seedlings, with a target of 65 million by 2032. Re.green has been so effective because it works closely with local communities, spreading its message and educating local people to get more of them actively involved in restoration, teaching them how to build sustainable livelihoods from the forest and training them in effective conservation. Its long-term goal is to restore 1 million hectares of forest, capturing 15 million tonnes of CO₂ every year.

Another innovative way to increase the greenery and air quality of our cities is the introduction of vertical forests, which integrate nature back into cities without putting demand on increasingly limited space.
Vertical forests remove CO₂ from the air via photosynthesis and therefore improve air quality in urban areas, improve biodiversity, and in many cases have been shown to boost human health and happiness. These forests are already taking off in several major cities, including Singapore and Taipei, in Taiwan. Most famous is Vincent Callebaut’s Tao Zhu Yin Yuan in Taipei, a 21-floor tower shaped like a double helix and covered in vertical forests. Its 23,000 plants absorb an estimated 130 tons of CO₂ each year, and their cooling effect on the building reduces the need for air conditioning by 30%, cutting energy demands and greenhouse gas emissions.

However, even if we were able to restore forests across all feasible degraded land worldwide, this would only be enough to absorb between 10 and 15 billion tonnes of CO₂ each year, around 25–40% of total emissions. This shows the importance of using many different approaches in conjunction.
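The arithmetic behind these figures is worth making explicit, since it implies an estimate of total global emissions (a sketch using only the numbers quoted above):

```python
# Figures quoted above: restored forests could absorb 10-15 billion tonnes
# of CO2 per year, stated to be roughly 25-40% of total emissions.
absorbed_low_gt, absorbed_high_gt = 10.0, 15.0   # Gt CO2 per year
share_low, share_high = 0.25, 0.40               # fraction of total emissions

# Each pairing implies a total-emissions figure consistent with the claim:
implied_total_a = absorbed_low_gt / share_low    # 10 Gt at 25%
implied_total_b = absorbed_high_gt / share_high  # 15 Gt at 40%
print(f"Implied total: {implied_total_b:.1f}-{implied_total_a:.1f} Gt CO2/yr")
```

Both pairings land near 40 Gt CO₂ per year, broadly in line with commonly cited figures for annual global CO₂ emissions, so the quoted percentages are internally consistent.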

Global tipping points mean that carbon capture will be one of the most essential technologies in combatting climate change: without removing any of the CO₂ currently in our atmosphere, the world is already subject to 50 further years of delayed warming, which may carry us beyond the point of no return. One company working on an innovative way to reduce atmospheric CO₂ is Captura. Captura aims to maximise the natural potential of the ocean as a carbon sink by removing dissolved CO₂ from seawater, allowing more to be dissolved. Our oceans naturally have a pH of around 8, which means that most carbon dissolved in the water exists as bicarbonate or carbonate. To remove CO₂ from the ocean, a sample of water is pumped into a system that uses electrodialysis to split the seawater into an acidic and a basic stream. The acidic stream is then added back to the main body of seawater, lowering its pH; as the pH decreases, a chemical reaction between the carbonate and the acid forms aqueous CO₂, which can then be extracted from the water and either permanently stored geologically or used to make synthetic fuel. Once the CO₂ is removed from the water, more atmospheric CO₂ can dissolve.

The main issue with this strategy is that electrodialysis requires a significant energy input, which must be carbon neutral or the emissions of the process itself would outweigh the CO₂ captured. On the other hand, the technique has several advantages, including the mitigation of ocean acidification (the slightly alkaline stream neutralises acidified water, benefiting many marine species), scalability, and a lack of consumables, as the only reactants required are already found in ocean water.
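The chemistry behind this pH swing can be sketched with the carbonic-acid equilibria. Using approximate seawater dissociation constants (pK₁ ≈ 6.0, pK₂ ≈ 9.1 — illustrative textbook values, not Captura’s actual process parameters), the fraction of dissolved inorganic carbon present as extractable CO₂(aq) jumps as the pH is lowered:

```python
def carbon_fractions(pH, pK1=6.0, pK2=9.1):
    """Equilibrium fractions of dissolved inorganic carbon present as
    CO2(aq), bicarbonate HCO3- and carbonate CO3^2- at a given pH.
    pK values are approximate seawater figures, for illustration only."""
    h = 10 ** (-pH)
    k1, k2 = 10 ** (-pK1), 10 ** (-pK2)
    denom = h * h + h * k1 + k1 * k2
    return (h * h / denom,        # CO2(aq), the extractable form
            h * k1 / denom,       # HCO3- (bicarbonate)
            k1 * k2 / denom)      # CO3^2- (carbonate)

for ph in (8.1, 5.0):
    co2, hco3, co3 = carbon_fractions(ph)
    print(f"pH {ph}: CO2(aq) {co2:.1%}, HCO3- {hco3:.1%}, CO3^2- {co3:.1%}")
```

At the ocean’s natural pH of about 8, almost all the carbon sits as bicarbonate and carbonate, with under 1% as CO₂(aq); acidifying the stream to around pH 5 flips the balance so that roughly 90% becomes dissolved CO₂, which is what makes extraction practical.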

The final industry in which I believe development is imperative to combat climate change is the production of renewable and green energy. Green hydrogen is hydrogen produced without releasing any carbon dioxide; used as a fuel, its only by-product is water, giving it an overall carbon footprint of zero. To produce green hydrogen you must first generate carbon-neutral electricity, the main sources of which are hydropower, wind power and solar power. This electricity can then be used in electrolysis to separate water into hydrogen and oxygen. According to the IEA, using renewable energy for this process would cut global carbon dioxide emissions by 830 million tonnes a year. Green hydrogen is a particularly beneficial fuel because, in addition to being carbon neutral, it can be stored, unlike many forms of renewable energy, and later converted back into electricity in hydrogen fuel cells. Since hydrogen production is very energy intensive, the process is often carried out when excess renewable energy is available and would otherwise be wasted.

Also tackling the storage dilemma, Australia is currently offering all residents three hours of free electricity every day during peak daylight hours. Because Australia produces excess solar energy during daylight, the government encourages people to use energy while it is abundant, rather than relying on non-renewable sources at other times, by subsidising it or making it free, negating the need for storage.
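The energy intensity of electrolysis can be put into numbers from basic thermodynamics (a sketch: 237 kJ/mol is the standard Gibbs free energy of water splitting; the 70% efficiency figure is an illustrative assumption, not a quoted industry value):

```python
# Thermodynamic minimum electrical energy to split water into H2 and O2.
DELTA_G = 237.1e3          # J per mol of H2 (Gibbs energy, standard conditions)
MOLAR_MASS_H2 = 2.016e-3   # kg per mol
J_PER_KWH = 3.6e6          # joules in one kilowatt-hour

min_kwh_per_kg = DELTA_G / MOLAR_MASS_H2 / J_PER_KWH
print(f"Theoretical minimum: {min_kwh_per_kg:.1f} kWh per kg of H2")

# Real electrolysers fall short of the ideal; 70% is an illustrative guess.
efficiency = 0.70
print(f"At {efficiency:.0%} efficiency: {min_kwh_per_kg / efficiency:.1f} kWh/kg")
```

The minimum works out to roughly 33 kWh per kilogram of hydrogen, and realistic inefficiencies push the practical figure well above that — which is exactly why pairing electrolysers with surplus renewable electricity makes sense.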

Climate change is not an unsolvable collective action problem, but it cannot be solved through international agreements and shifting government targets alone. While global cooperation is important, real and lasting progress will come from scaling multiple solutions simultaneously – from afforestation and urban greening to carbon capture and green hydrogen. No single strategy is sufficient in isolation, yet when developed in conjunction and supported through focused investment, their combined impact becomes significant. Rather than relying primarily on unstable political policies, governments should prioritise enabling and funding the industries capable of delivering measurable change, transforming climate action from a matter of negotiation into one of action.

The Promise of Superconductivity

Superconductivity has the potential to reshape the way our world works. A branch of quantum physics, it seeks to explain the behaviour of superconductors: materials which, under special conditions, conduct electricity with zero resistance while expelling magnetic fields from their interior. Since its discovery by Heike Kamerlingh Onnes in 1911, the phenomenon has developed from a curiosity to the forefront of modern technology, underpinning a range of new-age technological advancements. Improving our understanding of these materials, and achieving the special conditions they require more efficiently, would reshape the functionality of our world entirely.

Discovery and theoretical framework

Superconductivity was first observed by the Dutch physicist Heike Kamerlingh Onnes: upon submerging mercury in liquid helium, he found that the electrical resistance of the mercury vanished entirely. Onnes went on to publish numerous articles describing his discovery of a ‘new state of mercury’, and won the 1913 Nobel Prize in Physics for his findings. However, it was not until 1957 that the American physicists Bardeen, Cooper and Schrieffer formulated the BCS theory, which demonstrated that superconductivity arises from the formation of Cooper pairs.

In normal conductors, as electricity passes through in the form of flowing current-carrying electrons, collisions with the ions in the metal occur, generating resistance and causing energy to be lost as heat. These collisions arise from the vibrations of the ions in the conductor, whose quantised units are called phonons. However, at temperatures nearing absolute zero (0 K), these vibrations slow and collisions become increasingly infrequent.

As an electron moves through the crystal lattice, it attracts the positive metal ions around it, creating a region of enhanced positive charge. Although electrons normally repel each other because of their like charge, this positive region allows a second electron to be drawn in, and the two interact to form a Cooper pair. The paired electrons move in coherence with one another in a single shared quantum state, avoiding the scattering and collisions with metal ions that individual electrons would suffer.

Meissner effect

It was later discovered that, upon entering the superconductive state, superconductors also display remarkable magnetic behaviour. In the presence of an applied magnetic field, screening currents are induced on the surface of the superconductor; this is known as the supercurrent. The supercurrent generates its own magnetic field, which opposes and cancels the incoming field inside the material – the Meissner effect. The effect plays out differently depending on the type of superconductor:

In type 1 superconductors, the material expels the applied magnetic field completely. This holds up until the field becomes too strong, at which point superconductivity collapses, electrical resistance returns and the Meissner effect ceases.

In type 2 superconductors, rather than being completely expelled, some of the magnetic field is retained within the material in tube-like structures called vortices. As a result, the material can remain superconductive while admitting some magnetic flux, allowing it to withstand far greater magnetic fields without losing its superconductivity.
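Even in the Meissner effect the expulsion is not perfectly sharp: an applied field decays exponentially over a small distance inside the material, the London penetration depth λ = √(m / (μ₀ n e²)). A rough sketch, assuming a superconducting electron density of 10²⁸ m⁻³ (a typical order of magnitude, not a value for any specific material):

```python
import math

M_E = 9.109e-31             # electron mass, kg
E_CHARGE = 1.602e-19        # elementary charge, C
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def london_depth(n):
    """London penetration depth (m) for superconducting electron density n (m^-3).
    The applied field falls off inside the material as exp(-x / lambda)."""
    return math.sqrt(M_E / (MU_0 * n * E_CHARGE * E_CHARGE))

n = 1e28  # assumed electron density; real materials vary widely
print(f"lambda ~ {london_depth(n) * 1e9:.0f} nm")
```

This comes out at a few tens of nanometres, consistent with the tens-to-hundreds of nanometres measured in real superconductors; the field threading a type 2 vortex decays over a similar scale.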

High-temperature superconductors

Beyond conventional superconductors came the discovery that copper-oxide ceramics are superconductive at temperatures more than 30 K higher than conventional superconductors. Rather than having to be cooled with liquid helium, they can instead use liquid nitrogen as the coolant, which requires far less energy to produce and is much easier to handle. The main pitfall is that copper-oxide ceramics are brittle, with poor ductility, so forming wires or other useful structures from them is difficult or simply not feasible. More recent research has found that hydrogen-rich compounds called hydrides exhibit superconductive behaviour at temperatures approaching room temperature: a 2015 study showed that sulphur hydride becomes superconducting at around 200 K. However, this was achieved only under immense pressure, approximately 1.5 million times greater than atmospheric pressure on Earth, which is costly and difficult to maintain for extended periods.

Current and future applications of superconductors

Superconductors already have key applications in our society. In medicine, superconducting magnets are essential to the functioning of Magnetic Resonance Imaging (MRI) machines: by maintaining a large magnetic field that suffers no energy loss, they allow precise images to be taken of deep internal structures. In transportation, superconducting magnets are used to levitate trains, allowing high-speed, frictionless travel. This technology is currently being implemented in Japan in the form of the SC Maglev train, which aims to connect Tokyo and Nagoya in around 40 minutes by reaching speeds exceeding 600 kilometres per hour. Finally, from the perspective of theoretical physics, superconductors have enabled numerous discoveries in particle physics, such as the Higgs boson. This was made possible by the Large Hadron Collider in Geneva, which collides beams of sub-atomic particles using thousands of superconducting magnets, and which continues to carry out experiments contributing to our ever-progressing knowledge of quantum mechanics.

However, if superconductors could be developed that work at temperatures close to room temperature, the technology could be deployed on a worldwide scale. Room-temperature superconductors would dramatically improve global energy efficiency by eliminating electrical resistance in power transmission and electronic circuits. Without energy lost as heat, power grids could operate far more sustainably, reducing waste and carbon emissions. This would significantly lower electricity costs while supporting growing worldwide energy demands.

Unfortunately, at present this idea remains out of reach: the majority of recent reports of room-temperature superconductors have been retracted over claims of academic dishonesty, and the small minority that stand still require extreme conditions to remain superconductive. Professor J. C. Séamus Davis of the University of Oxford has observed: ‘This has been one of the Holy Grails of problems in physics research for nearly 40 years. Many people believe that cheap, readily available room-temperature superconductors would be as revolutionary for the human civilization as the introduction of electricity itself.’

Does religion promote peace or conflict?

For thousands of years, religion has been one of the most powerful forces in shaping human history dating much further back than known history (over 100,000 years ago). From the earliest ancient civilisations, religious beliefs have had influences on laws, cultures, moral systems and entire empires. The major world religions – Christianity, Islam, Hinduism and Buddhism have helped to guide billions of people in understanding and coming to peace with life, death, suffering, and the purpose of existence. Common religious beliefs often include faith in a higher power or divine being, which gives the moral codes to people, tending to encourage compassion, justice, and faith that if they follow these rules they will end up in heaven. Religion has inspired great achievements, including art, charity, education and movements for social reform. At the same time however, it has also led to wars, persecution and disputes between communities. Because religion has led to both unity and conflict throughout history, it

remains a debatable subject, with the controversial question being whether religion ultimately promotes peace, or whether it ends up leading to more unnecessary conflict. Despite religion’s moral teachings ultimately aiming towards peace, history suggests that conflict has often arisen to arguably a greater extent. A famous example of long-lasting conflict is the Crusades: a series of wars fought in medieval times between Christians and Muslims over control of the Holy Land (Jerusalem and some surrounding areas). Both Christians and Muslims believed that these lands should be in the power of their religion, and each side’s conviction that control of these lands was divinely important led to wars spanning hundreds of years. Although these were said to be sacred missions, the Crusades and other religious wars resulted in mass violence, political instability and long-lasting hostility between religious groups. Similarly, throughout Europe, conflicts such as the Thirty Years’ War devastated regions, motivated partially by tensions within a single religion, Christianity, between the Catholic and Protestant states.

As well as large-scale wars, religion has also contributed to persecution and discrimination. Through various periods of history, individuals were punished and often executed for heresy (any opinion, belief, or action that strongly disagrees with officially accepted beliefs), apostasy (abandoning one’s religion), or practicing a different faith. Laws in certain societies have imposed severe penalties, including death, for converting away from the state religion imposed by the country’s ruler, or for holding beliefs considered blasphemous.

In contrast, it can be argued that religion itself is misunderstood in these episodes of violence, as every religion promotes pacifism as the ultimate goal, with violence permitted in some faiths only in specific circumstances such as the protection of the religion or of oneself. Sacred texts are frequently taken out of historical context or used to justify violence and maintain power. Political ambitions, territorial expansion, and economic interests are often underestimated because people focus only on the religious aspects of a war’s causes, making conflicts appear solely religious when they are not. Regardless, without religious motives of some sort, many of these wars would not have occurred in the same form or magnitude, or even at all, as faith provided a powerful motive which separated many communities into believers and non-believers. By creating a strong sense of identity, which when combined with fear or extremism can be explosive, religion has repeatedly intensified tensions and disputes. Therefore, while religion teaches peace in theory, its historical misuse has undeniably contributed to significant conflict and suffering.

Although religion has been linked to violence, it is equally important to recognise the ways in which it has promoted peace throughout history. At the heart of most major religions lie teachings of compassion, forgiveness, and respect for others. Christianity emphasises

loving one’s neighbour as one would love oneself and forgiving one’s enemies, teaching believers not to respond to violence with violence but to turn the other cheek. Islam teaches mercy and charity, with violence permitted only in defence of oneself or of the religion, known as the lesser jihad. Buddhism focuses on pacifism and reducing suffering, and Hinduism promotes the principle of non-harm. These moral teachings have shaped ethical systems that encourage cooperation and love rather than conflict. For millions of believers, religion provides guidance on how to live peacefully within families and societies, and gives people something to believe in when they feel they have nothing else. In addition to the work of charities and other organisations which aim to provide assistance, the day-to-day actions of religious believers go unrecorded and are therefore overlooked. Religious believers tend to aim to live a kind, helpful and humble lifestyle, shying away from arguments which can be easily avoided or are unnecessary, and therefore reducing conflict.

Religion has also inspired peace movements and social reform. Leaders such as Mahatma Gandhi drew upon Hindu principles of nonviolence to resist oppressive British rule peacefully, while Martin Luther King Jr. used Christian teachings to advocate for civil rights through nonviolent protest, advancing justice and challenging racial discrimination. In addition, religious organisations around the world run charities, hospitals, schools and disaster relief programmes, helping people regardless of their background or beliefs. These actions demonstrate religion’s ability to unite communities in service and compassion through the selfless acts of believers.

To conclude, religions have played a complex and powerful role throughout human history, contributing to both widespread peace and significant conflict.
While it is undeniable that wars, persecution and discrimination have been carried out in the name of faith, these events often reflect human greed, fear and political manipulation rather than the true teachings of religion itself. At the same time, the core principles of the major religions emphasise forgiveness, non-violent justice, and respect for others. These values have inspired social reform movements, charitable organisations and peaceful resistance that have positively shaped societies around the world. On paper, religion may appear to have caused far more death, violence and destruction than the lives it has saved. In spite of this, peace is not measured entirely in statistics; it may also be perceived in the day-to-day actions, lifestyles and personalities of religious believers, whose aim is to spread kindness, comfort and happiness. Whether religion leads to more peace or more conflict

is a largely subjective matter with no single right answer, depending on a person’s own perspective. Ultimately, however, religion is neither purely violent nor purely peaceful; its impact depends largely on how it is interpreted and practiced. When misunderstood or exploited, it can divide people and justify harm; when followed with sincerity, it has the potential to unite communities, promoting morality and reconciliation. Therefore, rather than asking whether religion promotes peace or conflict, it may be more logical to consider how humanity chooses to use religion – as an excuse for division and power, or as a foundation for harmony and community.

How We Decide: The Neurobiological Foundations of Human Choice

From deciding between lunch options to picking a career, human existence is governed by an endless stream of choices. Each decision, no matter how trivial, triggers a complex neurobiological event. Whilst to us, many decisions feel intuitive or automatic, the underlying neural mechanisms controlling these choices are remarkably intricate. They involve a dynamic interplay between executive control processes, emotional evaluation, stress responses, and environmental factors. To fully appreciate the science behind decision-making, it is essential to explore the neuroanatomy, neurochemical pathways and the individual differences in decision styles. Greater understanding of these is driving advancements ranging from the clinical implications arising from dysfunctions in these systems such as depression and substance abuse, to optimising rational decision making in leadership.

The Neural Architecture of Decision-making

At the heart of decision-making lies a distributed yet interconnected network of brain regions that collectively process cognitive, emotional, and motivational information. The prefrontal cortex (PFC) stands out as the executive hub of this network, often described as the brain’s “command centre” for planning, evaluating options, and controlling impulses. Within the PFC, the dorsolateral prefrontal cortex (dlPFC) plays a pivotal role in managing working memory and applying long-term goals to guide decisions (Kable & Glimcher, 2009; Lin et al., 2025). This region enables you to hold multiple potential outcomes in mind, weigh their possible consequences, and exert control to inhibit impulsive responses.

Complementing the dlPFC is the ventromedial prefrontal cortex (vmPFC) and the orbitofrontal cortex (OFC), which are critically involved in value representation and emotional integration. These areas encode the subjective value of different options by synthesizing sensory inputs and emotional signals, thus influencing preferences and choices (Acconito et al., 2023; Lin et al., 2025). For example, when choosing between two job offers, the vmPFC integrates not just the salary information but also emotional factors such as company culture, location, and personal aspirations.

Subcortical structures, including the striatum, a region of the brain which controls multiple aspects of cognition, provide essential contributions to this interconnected network. The ventral striatum is central to reward anticipation and motivation, and reinforcement learning. It is here that dopamine release signals reward prediction errors, the difference between expected and actual outcomes, which are instrumental for learning and adapting preferences over time (Lin et al., 2025). This dopamine mediated system explains why a pleasant surprise can reinforce certain behaviours, whilst unexpected disappointments may discourage them, experiences which influence decision making.
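The reward-prediction-error idea described above can be illustrated in a few lines of Python. This is a toy sketch of the general learning rule, not a model taken from the cited papers; the learning rate and reward values are made-up numbers chosen purely for illustration.

```python
# Minimal sketch of learning from reward prediction errors (RPE), the
# signal that dopamine release in the ventral striatum is thought to carry.
# All numbers here are illustrative, not from any study.

def update_value(value, reward, learning_rate=0.1):
    """Nudge an option's estimated value towards the observed reward."""
    rpe = reward - value              # positive RPE = pleasant surprise
    return value + learning_rate * rpe

# Repeatedly receiving a better-than-expected reward (1.0 versus an
# initial expectation of 0.0) gradually raises the option's value:
value = 0.0
for _ in range(20):
    value = update_value(value, reward=1.0)
# value is now close to 1.0, so the behaviour is reinforced; unexpected
# disappointments (reward < value) would lower the estimate again.
```

The update shrinks the prediction error a little on each trial, which is why a string of pleasant surprises strengthens a preference while a fully expected reward (zero RPE) leaves it unchanged.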

The amygdala adds an emotional dimension to your considerations, particularly in the form of fear or anxiety, modulating the tendency to avoid risky or harmful outcomes. The anterior cingulate cortex (ACC) acts as a monitor, detecting conflicts and errors during decision-making to recalibrate strategies and optimize future choices (Lin et al., 2025). Together, these regions of the brain form a neural risk matrix, balancing the promotion and inhibition of risky behaviour, the risk-reward trade-off, and adapting to the uncertainty inherent in many decisions.

Functional neuroimaging studies have consistently shown that the degree of activity and connectivity among these regions varies depending on the complexity, uncertainty, and stakes involved in the decision. For example, high-risk gambling tasks elicit increased activation in the ACC and ventral striatum, reflecting heightened conflict monitoring and reward evaluation (Acconito et al., 2023; Lin et al., 2025).

Neurochemical dynamics and the impact of substance use

The neural circuitry of decision-making is moderated by a delicate balance of neurotransmitters that influence motivation, reward, and impulse control. Dopamine plays an especially prominent role. In the ventral striatum, dopamine release encodes reward prediction errors, allowing individuals to learn which choices yield better-than-expected outcomes and adjust behaviour accordingly (Lin et al., 2025). This system underpins reinforcement learning and is fundamental for adaptive decision-making.

However, this finely tuned balance can be disrupted by chronic substance use. Experimental research involving rats has demonstrated that cocaine leads to increased impulsivity in decision-making tasks (Shen et al., 2025). Specifically, cocaine impairs the connectivity between the prefrontal cortex and striatum, weakening the brain’s ability to control impulse-based drives. This neurobiological alteration predisposes individuals to favour immediate gratification over long-term benefits, perpetuating addictive behaviours. The ramifications extend beyond addiction: dysregulation of dopamine and related circuits is implicated in various psychiatric conditions characterised by maladaptive decision-making, including bipolar disorder, depression, and schizophrenia (Lin et al., 2025). Understanding how substance use rewires these circuits is crucial for developing targeted interventions that restore balance and improve self-control.

Individual Differences: Decision-Making Styles and Self-Representation

Beyond the core neural architecture, people differ widely in how they approach decisions—a phenomenon captured by the concept of decision-making styles. Neuroscientific research has begun to illuminate the biological underpinnings of these individual differences, suggesting that styles are shaped by how people represent their goals, adapt to change, and tolerate stress (Acconito et al., 2023).

A fundamental component of decision-making style is the self-representation of goals. This involves not only identifying what one wants to achieve but also prioritizing those goals and anticipating obstacles. For example, a manager may need to balance competing project deadlines with strategic objectives. Neurophysiological measures such as electroencephalography (EEG), which records the brain’s electrical activity through electrodes on the scalp, and functional near-infrared spectroscopy (fNIRS), which uses infrared light to measure blood oxygenation in the brain, have recently been used to track the brain’s response during goal prioritization, showing modulations in brain wave patterns linked to proactive cognitive control (Acconito et al., 2023). While this area of research is relatively new, it promises to deepen insights into how goal representation shapes decision tendencies.

Adaptability, closely related to cognitive flexibility, is another key determinant of decision-making style. Highly adaptable individuals can swiftly revise mental strategies and behaviours in response to shifting demands or unexpected challenges (Acconito et al., 2023). This flexibility is essential in today’s fast-changing environments and is associated with better risk management and resilience to stress. Proactivity (the ability to initiate change rather than merely react to it) and innovativeness, or creative thinking, complement adaptability by fostering leadership and strategic decision-making (Acconito et al., 2023). While the neuroscience of proactivity and innovativeness remains underexplored, early findings suggest these traits shape how decisions are approached in complex, uncertain contexts.

Risk-taking, Stress, and Decision-Making

At the core of many decisions is the trade-off between risk and reward. Risk-taking behaviour is modulated by situational factors, such as time pressure or information overload, as well as intrinsic traits and decision styles (Acconito et al., 2023). Brain regions including the medial PFC, ventral striatum, amygdala, and ACC, as discussed above, play coordinated roles in evaluating risk and uncertainty (Lin et al., 2025).

Stress exerts a powerful influence on decision-making. Acute stress can shift the balance between deliberate, controlled processes and instinctive, habitual responses. For instance, under stress, diminished activity in the dlPFC may reduce executive control, leading to increased risk taking or poorer judgement (Acconito et al., 2023). Conversely, moderate stress, such as a deadline, can sometimes enhance focus and decision accuracy depending on the context. Individual differences in temperament, such as novelty seeking, correlate with neuroendocrine stress responses, indicating that personality traits shape how stress impacts decision processes (Acconito et al., 2023). Repeated or chronic stress can further alter decision-making styles by reinforcing habitual responses over flexible, goal-directed choices. This phenomenon has implications for psychiatric disorders where stress exposure is common, and decision-making is impaired (Lin et al., 2025). Understanding the neurobiological mechanisms linking stress, risk tolerance, and executive control is vital for developing processes that improve decision outcomes in both healthy and clinical populations.

Clinical Implications: Decision-Making in Psychiatric Disorders

Maladaptive decision-making is a hallmark of numerous psychiatric conditions. In bipolar disorder, for example, patients often exhibit an imbalance characterized by heightened ventral striatum activity coupled with reduced dlPFC regulation. This results in a bias toward immediate rewards and impulsive, risky choices (Lin et al., 2025). Similarly, major depressive disorder (MDD) is associated with blunted reward sensitivity in the ventral striatum and altered effort-based decision making, often manifesting as motivational deficits and risk aversion (Lin et al., 2025).

Schizophrenia presents unique challenges, with studies showing altered processing of ambiguous versus risky decisions and atypical activation of the orbitofrontal cortex and insula. These neural alterations contribute to impaired decision-making and heightened sensitivity to reward outcomes (Lin et al., 2025). Anxiety disorders also affect decision-making, with neural responses in the vmPFC and dlPFC modulated by interventions such as transcranial direct current stimulation, highlighting potential therapeutic targets (Lin et al., 2025). Addiction disorders illustrate how chronic substance use disrupts decision-making circuits. Cocaine use particularly impairs frontostriatal connectivity, promoting impulsive decisions and diminished self-control (Shen et al., 2025). These findings show the importance of treatments that restore neural balance, such as cognitive-behavioural therapy, pharmacological agents targeting dopamine pathways, and neuromodulation techniques.

Towards an Integrated Neuroscientific Framework

The emerging picture of decision-making neuroscience calls for an integrated framework that combines behavioural data, self-report measures, and neurophysiological tools like EEG, fNIRS, and autonomic recordings (Acconito et al., 2023). Such a multilevel approach can capture the explicit and implicit components of decision styles, emotional responses, and cognitive control.

This comprehensive perspective not only advances theoretical understanding but also offers practical benefits for clinical intervention, as well as leadership development. For example, identifying neurophysiological markers of decision-making styles could help tailor treatments for psychiatric patients or optimize decision strategies in high-stakes professional settings (Acconito et al., 2023).

Conclusion

Decision-making is a multifaceted process grounded in a sophisticated neural network that integrates cognition, emotion, motivation, and stress regulation. Individual decision styles emerge from differences in goal representation, adaptability, risk tolerance, and stress management. Disruptions in these neural systems underlie the maladaptive choices seen in psychiatric disorders, addiction, and stress-related impairments.

By deepening our understanding of the neurobiology of choice, integrating behavioural, physiological, and clinical insights, neuroscience paves the way for innovative, personalised interventions. These advances hold promise for improving decision-making capacity, boosting mental health, and ultimately supporting human autonomy in an increasingly complex world.

Guided by the Sun: Solar thermal power plants and their potential to revolutionise renewable energy

Solar thermal power plants (also called concentrated solar power plants, or CSPs) are huge sunlight-powered power plants, a bit like solar farms. Getting their energy from the sun, CSPs use mirrors to focus the light onto a receiver. By doing so, heat can be drawn from a large surface area, making the net heat potential very large even if the sunlight hitting one mirror doesn’t create much energy. Although CSPs are a relatively new and innovative form of renewable energy production, they are already relatively widespread across the world. They are generally used for two things - in places with high levels of direct sunlight, they are often used for creating electricity, whereas in areas with lower levels of direct sunlight, CSPs can be used for directly heating infrastructure without generating electricity. This includes heating houses and industrial uses, for example providing heat for desalination, which is a growing industry as global warming begins to cause more water shortages. CSPs could be a crucial milestone in renewable energy because of their ability to efficiently store thermal energy for distribution at any given time. This essay will look at the science of CSPs, their relative merits and shortfalls compared to other forms of renewable energy, and their application worldwide.

Parabolic troughs, the most common form of CSP, use a concave mirror to direct sunlight onto a receiver - a pipe running down the length of the mirror, filled with a fluid that is usually thermal oil or molten salt (both with extremely high boiling points). These carry the heat back to a plant where it can be used to heat water and turn a turbine connected to a generator, creating electricity. The mirrors track the sun through the day, optimising the light that hits the receiver to create as much energy as possible.

Another form of CSP is the solar power tower (also known as a central receiver system), which uses huge fields of mirrors to direct light towards a central receiver. The receiver sits at the top of a tower, sometimes 260m tall, with a transfer fluid running through it. In central receiver systems, the transfer fluid is usually a molten salt. When the fluid is pumped to the top of the tower, it is heated to between 500 and 1,000°C very quickly, sometimes in only a few seconds, before being pumped back down into the plant. The fluid is either used for heating or electricity, or can be stored for up to a week before being used. The fluid can then be pumped into the system again and reused.

Parabolic troughs in the Mojave Desert (California)
Central receiver systems in the Mojave Desert

Other forms of CSP include linear Fresnel systems, which use a field of mirrors arranged along an axis to direct light onto a tube mounted directly above and parallel to the mirrors, and Dish Stirling systems, which use a large parabolic dish to concentrate sunlight onto a receiver, directly driving a Stirling engine that converts thermal energy into mechanical energy.

Solar thermal power plants are attempting to challenge the dominant way of creating energy from sunlight, which has until now been solar panels. Solar panels convert sunlight directly into electricity without converting it into heat first, unlike CSPs. This means that the conversion process is simpler, but it also means that expensive batteries must be used to store the electricity. In a CSP, by contrast, the energy can be stored as heat (thermal energy), which negates the need for batteries. Instead, the hot fluid can be stored in insulated tanks for up to a week with minimal energy loss. This heat can then be drawn upon whenever it’s needed, for example during the night or on cloudy days. These insulated tanks cost much less than the batteries that would be needed to store the energy from an entire solar farm. Furthermore, even more sophisticated and efficient methods of heat storage are being developed. For example, phase change materials absorb energy when melting and release it when freezing while staying at a near-constant temperature, allowing heat to be stored for longer periods. Similarly, thermochemical storage stores energy through reversible chemical reactions: the reactants are converted into products which can be stored separately until they are needed, when they are recombined to form the original reactants and release energy as heat. These techniques allow the energy that CSPs harvest to be stored for much longer. The longer the heat can be stored, the longer a CSP can release electricity to the grid without pause, increasing the plant’s reliability.
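The amount of heat a molten-salt tank can hold follows the sensible-heat formula Q = mcΔT. A rough back-of-envelope sketch in Python, using invented but plausible figures rather than data from any actual plant:

```python
# Sensible heat stored in a tank of molten salt: Q = m * c * deltaT.
# All figures below are assumptions for illustration only.

def sensible_heat_J(mass_kg, specific_heat_J_per_kgK, delta_T_K):
    """Heat stored or released when a mass changes temperature."""
    return mass_kg * specific_heat_J_per_kgK * delta_T_K

# Assumed figures: 1,000 tonnes of molten salt, c ~ 1,500 J/(kg K),
# salt cooled from 565 degC to 290 degC as heat is drawn off (deltaT = 275 K).
stored_J = sensible_heat_J(1_000_000, 1500, 275)
stored_MWh = stored_J / 3.6e9        # 1 MWh = 3.6e9 joules
# roughly 115 MWh of heat from a single tank - the kind of reserve that
# lets a plant keep driving its turbine after sunset.
```

Phase-change and thermochemical storage improve on this by adding latent heat or reaction energy on top of the simple mcΔT term.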

Furthermore, because CSPs convert sunlight into heat first (before turning the energy into electricity), they are also able to distribute energy directly as heat, which can be used for water desalination, chemical production and mineral processing, as well as producing green fuels like hydrogen. Because solar panels convert sunlight directly into electricity, they are unable to do this, which could make CSPs a more attractive option. Other positive aspects of CSPs include their higher efficiency at higher sunlight levels compared to solar panels. CSPs are able to convert solar radiation into electricity with increasing efficiency at higher temperatures, whereas solar panels reach their maximum efficiency (and energy production) at lower temperatures. This makes CSPs ideal for hot, arid areas such as deserts (provided there is a good water supply).

Fresnel and Dish Stirling
The Gemasolar Plant in Seville, Spain. This uses pioneering molten salt heat storage

Solar thermal power plants also have many weaknesses, some of which will be solved as the technology grows and develops, and some of which are limits that affect the entire concept. First of all, there is the high upfront cost of building the infrastructure - huge investment is required to buy the land, the heliostats and the power blocks (which convert the heat into electricity), making CSPs less accessible to less economically developed countries.

CSPs need huge open spaces with high levels of direct sunlight and a good water supply. This is a particular challenge, as places with higher levels of direct sunlight are often more arid. Efficiency is also a problem - while CSPs can reach 90% efficiency when capturing sunlight as heat, overall efficiency drops drastically to only 15-20% once that heat has been converted to electricity. This is similar to solar panels, which reach 15-22% efficiency. It is possible that this will improve as the technology advances.

The largest CSPs are in the UAE, USA and Morocco, with the single largest being the Mohammed bin Rashid Al Maktoum Solar Park in the UAE. Covering 77 km2, the park has a target capacity of 8,000 MW of electricity. CSPs have also been successful all over the world: Spain has 2.3 GW of CSP capacity, the most in the world, the US is second with 1.5 GW, and China is third with 588 MW. Use of CSPs in the UK is unfortunately limited by the cloudy and often changeable weather: CSPs here would not be able to achieve the high efficiency that can be reached in hotter places with more direct sunlight. Solar panels, which are more efficient at low levels of sunlight and heat, are much better suited to the UK.

Finally, the levelised cost of electricity (LCOE) from CSPs is relatively high. LCOE is the average cost of a unit of electricity from a power plant over its lifetime, taking into account the cost of construction and dismantling as well as ongoing maintenance. At the moment, the LCOE of CSPs is around £0.07 per kWh, more than double that of solar farms at £0.03, wind farms which vary from £0.03 to £0.08, and hydropower and geothermal, which fall below £0.05.
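The LCOE calculation itself can be sketched in a few lines: lifetime costs divided by lifetime output, with both streams discounted back to present value. Every number in the example below is invented for illustration, not real plant data.

```python
# Sketch of a levelised cost of electricity (LCOE) calculation.
# All inputs are hypothetical figures chosen for illustration.

def lcoe_per_mwh(capex, annual_opex, annual_mwh, lifetime_years, rate):
    """Return the levelised cost in currency units per MWh."""
    cost = float(capex)                   # build cost paid up front
    energy = 0.0
    for t in range(1, lifetime_years + 1):
        discount = (1 + rate) ** t
        cost += annual_opex / discount    # discounted running costs
        energy += annual_mwh / discount   # discounted yearly output
    return cost / energy

# Hypothetical CSP plant: £250m to build, £10m/yr to run,
# 400,000 MWh/yr output, 30-year life, 7% discount rate.
price = lcoe_per_mwh(250e6, 10e6, 400_000, 30, 0.07)
# comes out at roughly £75/MWh, i.e. about £0.075 per kWh -
# the same order of magnitude as the CSP figure quoted above.
```

Because the build cost dominates, anything that cuts construction costs or extends plant life pulls the LCOE down sharply, which is why the figure is expected to fall as the technology matures.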

In conclusion, CSPs have the potential to change the energy industry, yet they are limited to places like the UAE or America, which have the funds, deserts and water that are rarely found together. I don’t think that the UK will ever use solar thermal power, but one day CSPs may bring affordable energy to poorer countries.

The construction site of the Cerro Dominador CSP in Chile, this was partly funded by the EU
Mohammed bin Rashid Al Maktoum Solar Park

Marlborough College, Wiltshire SN8 1PA www.marlboroughcollege.org
