A Ballad of Willow Never Ceased – To Toad Hall’s 45th Birthday
“Yeah I see… but where is Toad Hall?”
That’s the most frequently asked question when I tell friends I am a Toadie (aka a Toad Hall resident). And honestly, I don’t blame them. Toad Hall is one of the oldest buildings on the ANU campus, old enough to fade into the horizon and to have given up on air conditioning. As a result, we all have to inhabit the AA Common Room during summertime to avoid the heat, but this also means we share dinners at the big table, with sides of laughs and stories, and attempt a Harry Potter marathon screening which we never finished (everybody fell into slumber after Azkaban).
Toad Hall has no secrets, and everybody knows that. Thanks to the sound-transparent walls, if you gossip in your own room, everybody will know your little story by the end of the week. Perhaps because of this, most people choose to be super mindful of others. People in Toad greet each other like old pals and organise potlucks like a big family. After all, we already see this place beside Sullivan’s Creek as our “home away from home”.
Toad Hall is a maze, but in the best way possible. First-time guests will definitely get lost in its unique architectural design. But it’s just like life: full of uncertainties, and filled with surprises. We have a group of Senior Residents (SRs) and Community Coordinators (CCs), formed by current students, working as the gatekeepers and servants for the community. Each pair is responsible for a different portfolio, like Media, Sports, Wellbeing, and Academic. You can check our various weekly and special events on Facebook, in the ‘Croak Newsletter’, or by chatting with your nice neighbours in the block kitchen. Whenever you go down to the AA Common Room, you’ll always be able to explore new hobbies and find a clan – you can never get bored in Toad.
Recently, we had our ‘Research Night’, where three residents shared their professional insights in academia; we’ve also had a ‘Bollywood Night’, organised by the Toad Hall Residential Advisory Committee (THRAC), to teach us Indian dance moves; and we’ve all rallied together to keep up with the newest episode of the battle for the Iron Throne. Above all that, representatives from each country/district are busy preparing their own national performances and cuisines to put on display in the annual ‘Multi-cultural Festival’ (MCF) at the end of this month, because Toad Hall values nothing more than diversity and community wellbeing.
We just love this hall and its people more than anything else. Soon enough, we will all be leaving this place for another step in our journeys, but this good old building, with a BBQ beside the willow trees, will never perish in our hearts. And one day, I’d be proud to say I’m a Toadie alumnus. Because the essence of the ANU values – academic excellence, equity, tolerance, and a global horizon – rests here at 30 Kingsley Street.
The Past Which Will Not Pass: Policy Makers Trapped Under History’s Shadow
Politicians are notorious for employing ‘history’ both to (supposedly) make sense of a situation and to convince others that their chosen policy path is best. Think of Kennedy in the Cuban Missile Crisis, who didn’t want to be Tojo, or every time – from Truman in Korea to Nixon in Vietnam to Bush in Iraq – that you’ve heard the words ‘Munich’, ‘Chamberlain’, ‘Hitler’ and ‘Appeasement’ in the same sentence. This is them doing it, and often, it’s a bit of a problem.
The German historian Ernst Nolte calls history like this the “past that does not want to pass away”, referring specifically to the shadow of the Holocaust looming over the German collective consciousness. Generally, the events which stick in the collective consciousness are the most traumatic ones, because these are the ones we really do not want to live through again.
The Munich analogy is one of the most widely employed historical analogies: a glaring word which suggests that all dictators are alike, and that appeasement will always fail. It refers, of course, to the policy of appeasement which the Allied powers adopted towards German aggression in the 1930s, and which evidently failed to stop the outbreak of WWII. Used like that, Munich can only mean failure and weakness.
George Bush, in 2003, rationalised his intervention in Iraq on the grounds that Hussein possessed weapons of mass destruction, and that the situation was getting ever closer to that of the 1930s. To Bush, appeasement in Munich had resulted in a world war; therefore, appeasement in Iraq would result in a world war again. It’s a rhetorically powerful argument: Hussein, by refusing to comply fully with the international weapons inspections which had been agreed upon by the UN, was similar to Hitler accepting the annexation of the Sudetenland before later adding more and more unreasonable demands. The roles of the various players had been cast, too: Hussein was the villainous Hitler; Germany and France, willing to appease his demands, were Chamberlain; and, of course, Bush cast himself as Churchill. To Bush and his supporters, there was only one viable option: invasion, for had the Allies intervened in 1938, World War II and the Holocaust could have been avoided.
The biggest problem of all for Bush was that, despite the surface commonalities between the situation of the early 2000s and that of the 1930s, the two otherwise had very little in common. Hitler was the aggressive leader of the most powerful military state in Europe, and the Munich Agreement handed him sovereign territory of an established democracy. Hussein, on the contrary, had received nothing but punishment since the Gulf War: military and economic sanctions that were crippling his state, and limitations on its sovereignty.
Furthermore, even within the US itself, there were those who did not agree with Bush, and who employed the famously contradictory analogy of Vietnam, which warns instead of the dangers of intervention in developing states – particularly of engaging in asymmetrical wars fuelled by nationalism. Politicians hence pick and choose the analogies that best serve their own interests.
Of course, the comparison that George Bush made between Hussein and Hitler posed a particular problem for one of the US’ allies across the Atlantic – Germany. All discussion of German history has been said to both begin and end with the Nazis. But the equation of Hussein with Hitler was far too simple for German policymakers at the time, who were wary of any unprovoked aggression.
The rhetoric coming out of Washington greatly complicated matters within the Bundestag. Bush’s “axis of evil” of North Korea, Iran and Iraq, and his characterisation of Hussein as Hitler, seemed far too black and white to the Germans, who had been instilled with a deep fear of fervently held ideological belief. Indeed, Schröder had once before described Bush as diverting attention away from domestic problems with a tactic “Hitler also used”.
Yet even in the Bundestag, there existed two separate interpretations. In this case, the oath upon which German Basic Law had been founded – “Never Again War, Never Again Dictatorship” – was seemingly contradictory.
Schröder’s government focused on the first part of the oath: never again war. To Schröder, unprovoked wars of aggression would always be synonymous with those of the Nazis. So Schröder was also using the lessons of the 1930s, but was arriving at a different conclusion. For him, Bush’s ‘War on Terror’ did not satisfy the conditions of multilateral action under which Germany was prepared to aid intervention, nor did Resolution 1441 justify the use of force in Iraq. Therefore, he would not allow himself to aid the ‘Nazi-like’ aggression. Furthermore, according to such an interpretation, Germany, as a democracy, has a responsibility to sit on the ‘right’ side of history now and refrain from any war, ever.
His Opposition, focusing on the second half of the double oath – never again dictatorship – also employed the Munich analogy, arguing that Germany knew firsthand the dangers of dictators with too much power. CDU member Michael Glos is quoted as saying that as “a people which is responsible for the Holocaust because it was not able to stop a dictator in time”, they too had a responsibility to stop Hussein.
So, while it is rhetorically powerful for policymakers to employ analogies in order to persuade or dissuade their publics, the past is immense, complex, and open to interpretation. And these interpretations can lead to not only different, but completely contradictory, policy prescriptions. How helpful is history, really? A difficult question to answer – so be wary the next time you hear a politician using historical examples to describe the present.
On 14 June, 1950, the student journal of what was then the Canberra University College announced a name change. In the search for something “more inspiring” than the original name, Student Notes, the editors decided to pick a title from an Aboriginal language, “because it is far more significant to us, particularly in the Capital City of Australia, than any word of foreign origin.” They chose the word ‘Woroni’, which they stated meant ‘mouthpiece’. Today, 68 years later, Woroni’s Wikipedia page repeats this etymology, declaring that the name “derives from an Indigenous Australian word meaning ‘mouthpiece’.” Over the past 68 years, a key question has remained unanswered. There are estimated to have been 250 different language groups in Australia before European invasion, 120 of which are spoken today. If Woroni is genuinely derived from an Aboriginal language, which of these 250 languages does it come from?
Some past editions of Woroni have claimed that the publication’s name is derived from the Ngunnawal language spoken in the Canberra region. There is no evidence to support this claim, which appears to be based on guesswork. Woroni’s 1950 editorial team were following a long tradition of settler Australians appropriating Aboriginal and Torres Strait Islander words to name a wide variety of things, from place names to literary journals such as Meanjin, as part of a broader search for an authentically Australian identity. A number of books were produced in the twentieth century to assist in this endeavour. One of the most popular was Sydney J Endacott’s Australian Aboriginal Native Words and Their Meaning, which went through ten editions between 1923 and 1973. Endacott praised “the use of musical native aboriginal (sic) names … with advantage to the furthering of the growth of a distinct national feeling.” He hoped to fulfil a “demand for a substantial and reliable list of pleasant-sounding words”. The Woroni editors most likely chose their publication’s new name from Endacott’s widely available compilation, where it is listed as meaning “mouth” – the extension of this to “mouthpiece” may be an example of the editors’ creative licence. Endacott gave no indication of the origin of the words he listed. Their cultural context was of no importance: what mattered was whether they could be used as a “pleasant-sounding” name. Uncovering the true origins of Woroni requires a little more digging.
Endacott claimed his book was the result of “much sifting of lists of words, and a good deal of research among old books and journals.” One of the sources he would have consulted was Edward M. Curr’s four volume work The Australian Race, published between 1886 and 1887. Curr was a major landowner in Victoria, who was intimately involved in the dispossession of Aboriginal peoples on the colonial frontier. As a member of the Victorian Board for the Protection of Aborigines, he advocated for the incarceration of Aboriginal Victorians who had survived the frontier wars, likening them to “children” and “lunatics”. Simultaneously, he dedicated a considerable amount of time to recording Aboriginal language and customs, believing he was preserving cultural relics of a people doomed to extinction. A major part of Curr’s work was the wordlists of Aboriginal languages he had collected from three hundred correspondents across Australia. It is in one of these wordlists, contributed by a Thomas Macredie, that we find ‘Woroni’. Here it is defined as meaning “mouth”, and is said to come from the geographical area of Piangil, in northern Victoria. According to the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS), the language spoken in this area is that of the Wadi Wadi nation. Wadi Wadi country straddles the Murray River in northern Victoria. Despite the effects of the colonial invasion of their lands that began in 1846, the Wadi Wadi people have survived and continue to care for their ancestral country. Descriptions of their innovative land management techniques can be found in Bruce Pascoe’s influential book Dark Emu.
What are the implications of this? The name of ANU’s student newspaper was not chosen as a result of consultation with Wadi Wadi people. It is highly unlikely that the editors at the time were even aware of the Wadi Wadi language. In the words of historian Samuel Furphy, the use of Aboriginal words for naming by settler Australians “has very rarely been the result of sensitive and meaningful cultural interchange.” Referring to the use of Aboriginal and Torres Strait Islander words as Australian place names, the Koori novelist and historian Tony Birch writes that “Houses, streets, suburbs and whole cities have Indigenous names. This is an exercise in cultural appropriation, which represents imperial possession and the quaintness of the ‘native’. For the colonisers to attach a ‘native’ name to a place does not represent or recognise an Indigenous history, and therefore possible Indigenous ownership.” Words are a vital part of Aboriginal culture, but many settler Australians have valued them only for their novelty. At a time when government policies aimed to erase Aboriginal and Torres Strait Islander languages, the Woroni editors of 1950 chose their publication’s name without concern for its origins or cultural context.
Many questions arise when considering this history, including: Given Woroni’s stated commitment to standing with Aboriginal and Torres Strait Islander people, in what ways can it ensure that this careless appropriation is not perpetuated, and that the Wadi Wadi origins of its name are honoured? Given the intimate links between Aboriginal languages and country, what are the ethics of using a Wadi Wadi name for a publication produced on Ngunnawal and Ngambri lands? Would a collaboration with the Ngaiyuriija Ngunawal Language Group, which has been working to revitalise the Ngunawal language, produce a more appropriate name? There are no simple answers to these questions, but they should be carefully considered.
Note on Sources
This article would have been impossible without the assistance of Michael Walsh of AIATSIS and David Nash and Harold Koch of the ANU School of Literature, Language and Linguistics.
Macredie’s wordlist is on pages 448-451 of the third volume of Curr’s The Australian Race.
The quote from Samuel Furphy is from his article “Aboriginal place names and the settler Australian identity” in Melbourne Historical Journal 29 (2001): 71-78.
The quote from Tony Birch is from his article “‘Nothing has Changed’: The Making and Unmaking of Koori Culture”, in Meanjin 51(2) (1992): 229-246.
There are numerous spellings of Wadi Wadi: I have used that used by the Murray Lower Darling Rivers Indigenous Nations group.
Mad Dan Morgan is a little known, and therefore incredibly underrated, figure in Australian history. Born Jack Fuller (1/10 – boring name, due an upgrade), Mad Dan was adopted by ‘Jack the Welshman’ (8/10 – a descriptive, but not particularly original name), and eventually became a criminal. Disclaimer: there are a lot of names in this narrative, particularly because as soon as ‘Mad Dan’ ditches ‘Jack Fuller’ he becomes ‘John Smith’ (10/10 – purely because it’s the most stereotypically common and mundane name in history). Mad Dan was basically a red hot criminal dude. A quick highlights reel includes: stealing horses, saddles and bridles and selling them, as well as holding up various small, rural businesses. He terrorised the Australian Outback for years, and was pretty famous for it. He was a big enough deal that a fair few business owners basically had the policy that if Mad Dan showed up, you did what he wanted, and you didn’t kick up a fuss. Some even went so far as to hang up food in the outback for him, hoping that if he had easy access to food he wouldn’t turn to crime. Spoiler alert: this usually didn’t work, primarily because Mad Dan DGAF.
Most importantly, our Mad Dan had a flair for the dramatics, and went by several fantastic names, including ‘Down the River Jack’ (20/10 – best name I’ve ever heard, a classic). His companion, ‘German Bill’ (5/10 – because compared to ‘Down the River Jack’ it sucks), helped him with a variety of stunts, but eventually the pair was surprised by a party of police. Dan (the bastard!) turned on his friend and shot him point blank, thus evading arrest. He continued being a great criminal – as in, great at being a criminal, not just great in general – and had a good ol’ time.
But Mad Dan was more than your simple run-of-the-mill hooligan. When holding various businesses at gunpoint, he would steal alcohol from the business’ stash and feed it to his tied-up prisoners/hostages. At his core, Mad Dan may have been little more than a lonely boy, looking for a classic rave. But, alas, he wasn’t quite cut out for the party life, instead getting annoyed at the noise levels and then blowing people’s brains out (classic “this music’s too loud” move). At other times he made people cook for him naked, forced women to sit on fires, pretended to be a house guest at the places he robbed, and was generally a bit of a weird bloke.
Finally, after years of shenanigans, Mad Dan Morgan came to his untimely end when he was shot in the back by police, but, luckily, the fun didn’t end there. After his death, Mad Dan was identified by literally almost a hundred of the people that he held up: his face was CUT OFF (how?) and his scrotum was made into tobacco pouches – no doubt passed down through the generations. If you have a Mad Dan tobacco pouch please slide into my DMs.
And there you have it, folks. The fun, weird and overall exciting life of Mad Dan Morgan (who should be known as ‘Down the River Jack’, because it’s by far his best name): an all-round kooky guy with a flair for dramatics, and no conscience.
After Auschwitz, There Can Be No Poetry: Guilt, Germany and Gunter Grass
After Auschwitz, writes Adorno, “to write poetry would be barbaric”. For how can the language of burning books, extermination camps and final solutions be used to create beauty once more?
German author Gunter Grass, awarded the 1999 Nobel Prize in Literature for his portrayal of the “forgotten face of history”, attempted to answer this question. In a 1979 speech he implored: “what shall we tell our children … what are we to say of the German guilt that has lived on from generation to generation?”
This German guilt, according to Grass, was a necessity for a country that had allowed Hitler to rise to power through its own democratic institutions. Whose ordinary men and women had supported his ideals. Indeed, in his 1963 novel Dog Years, he writes about magic spectacles that allowed German children to become privy to just what their parents had been up to in the years between 1939 and 1945.
He stripped the mythology away from the Nazis, rendering them as someone’s father, neighbour, brother, son. Creating a working German identity was predictably difficult – when Hitler ordered the destruction of the Third Reich moments before his suicide, nihilism essentially triumphed. What Grass wanted was to drag the dark years out from under the rug and openly speak about them. To try to create a working German identity, post-Auschwitz.
The atrocities committed during the Second World War in the name of the German people contaminated, according to Grass, the very German language itself. The only way to cleanse it was to drag it through ‘literary’ muck. Grass’ 1959 novel The Tin Drum embodied this muck, as well as its author’s mission: to awaken the German conscience to the horrors in which ordinary people had partaken.
The images that rest with the reader long after the final page has been turned are disturbing. An attempted sexual encounter between a dwarf and a nun, sex being allegorised through fizz powder, eels eating a horse’s head from the inside and a girl pissing in a pot of soup, all narrated by a precocious three-year-old dwarf born with the brain of an adult and the ability to shatter glass with his screams. The only appropriate response is violent recoil. And is there any response more apt?
Yet, much as the novel was blasted as an “exercise in blasphemy and pornography”, it does attempt to move beyond such literal readings. Grass’ magical realist text cannot avoid its place in history. Indeed, when the Nazi atrocities became public, people had little choice but to admit ‘collective guilt’, and Grass mocks this very notion. He essentially sought, with The Tin Drum, Dog Years and Cat and Mouse which together make up the Danzig trilogy, to blast a hole into the collective amnesia of a German generation which wanted instead to focus on economic growth and progress.
But what Grass seemed to fail to mention, whilst encouraging others’ responsibilities, was his own individual guilt. In 2006, almost 60 years after the fact, he admitted that he himself had been a voluntary member of the Waffen-SS. Before this, he had largely been assumed to be one of the generation too young to have been active in the war. And why had he joined?
He cannot put his finger on a response to this question. In a piece for The New Yorker he cites his animosity towards his stuffy, cramped and Catholic home. In the end, however, he seems led inevitably to the conclusion that what drew him to enlist were dreams of glory. The allure of black and white newsreels which advertised black and white truth to his seventeen-year-old self. Nobody ever lost wars in the news. Even up until the very end of the war, Grass has been quoted as saying, he still believed Germany would win.
Of course, his confession was immensely criticised. Many called for his works to be, if not banned, at least ignored. He was seen as a moral hypocrite who had encouraged others to examine their individual responsibility in the war effort whilst ignoring his own. One notable hypocrisy was his denunciation of Reagan and Kohl’s 1985 visit to a cemetery where Waffen-SS members were buried, even though he himself had been a member of the SS.
The man once described himself as “inexorably attuned to contradiction”, and there is perhaps no better way to describe him than as inherently contradictory. Born in 1929 in Danzig, to a German father and a Kashubian mother, he came of age in a continent torn apart by hatred. Danzig, now the Polish city of Gdansk, was the first territory to be captured by the Nazis.
His political views are interesting, to say the least. Politically left-leaning, he denounced revolutions whilst defending Castro’s Cuba, and encouraged a slow move towards progress. He called Catholic and Lutheran ideologies moral accomplices of Nazism. Intensely anti-nationalist, he argued against German unification on the grounds that a people responsible for the Holocaust had forfeited the very right to self-determination – that Germany was better kept weakened so that it did not attempt to become belligerent once more. He denounced repression in the Soviet bloc and fundamentalist religious governments, yet simultaneously criticised Western capitalism and was especially angered by Germany arming Hussein. Furthermore, he got himself declared persona non grata by Israel after publishing a poem in which he declared that Germany must not continue to help arm the country.
Grass ends up tainted by the brush which tainted all of the twentieth century. Depending on the way you look at it, this either makes his works and words completely senseless and false, or makes him even more so the conscience of the nation and of humanity itself. Or does his denial of his own truth speak to a deeper ambiguity of truth and memory itself? He likened memory to an onion. How many layers must we peel back before we arrive at any semblance of meaning?
Mythbusting History: How Did the Science of the Nuclear Bomb Keep the Cold War Cold?
The bombings of Hiroshima and Nagasaki had a devastating impact on the world, while cementing the US as a major strategic power. The lessons learnt about the power and force of nuclear weaponry significantly influenced the adoption of the nuclear strategy of deterrence. Many historical figures of the time discussed the taboo associated with using these weapons of mass destruction, and their anti-nuclear-war stance led to other avenues for relieving the Cold War tension between the West and the Soviet Union. The science behind the development of the atomic bomb, however, led to other uses for nuclear energy within Western military strategy, including the submarines used in naval espionage that helped keep the Cold War cold.
The use of the atomic bomb in late WW2 left Japan devastated, while elevating the US to a much stronger position as a world power, with an emerging influence stretching from Europe to Asia. This is illustrated by the number of nuclear weapons created between 1945 and 1962: the US built over 3,000 missiles with nuclear capabilities, while the USSR procured only approximately 1,500. This disparity between the two world powers suggests rising tensions and the West’s domestic concerns over Soviet military actions. The United States’ policy of deterrence in military strategy demonstrates the precautions Western nations took with these weapons in the 1950s. The Truman administration’s attacks on Hiroshima and Nagasaki left an impact on the political mindset regarding the application of nuclear weapons. This is exemplified in Truman’s press conference in 1950, where he described the bomb as a weapon “of mass destruction” with significant impacts on non-combatants in the war – “innocent men, women, and children who have nothing to do with the military aggression”. This illustrates the anxiety surrounding weapons of mass destruction and highlights the growing taboo on the use, rather than the development, of nuclear weapons by the US and other Western nations. Historians have argued that the rationality of states in this period was a major factor in the non-use of nuclear arms. This rationality – the taboo against using these weapons, and the fear of mutual destruction held by both West and East – also removed the potential for the ‘total’ war seen in the earlier part of the 20th century with WW1 and WW2. The gap between US and Soviet nuclear arsenals was another issue raised by historians critical of the aggressive American growth of weapons of mass destruction.
As a result, Western and Soviet military strategy turned to espionage and early technological warfare and redirected nuclear technology use to other military means, including powering the engines of submarines.
Espionage and the emerging use of strategic intelligence from the 1940s to the 1960s was another military policy used by Western nations. Human, signals, and technological espionage were widely used by both the USSR and the US. The US harnessed technological espionage for mobile collection operations that could gather intelligence across Soviet borders. Between 1947 and 1960, as many as 13 US intelligence flights were shot down over Soviet territory. These flights heavily impacted Soviet intelligence groups, such as the NKVD, and the subsequent retaliation of Soviet intelligence forces created an ‘intelligence war’. This retaliation spurred a pursuit of more innovative and less invasive intelligence systems after 1960. Despite the ‘intelligence war’s’ need for a high level of secrecy, some aspects became surprisingly public. For example, the 1945 defection of the code clerk Igor Gouzenko from the Soviet embassy in Ottawa publicly revealed the scale of Soviet espionage on its then-supposed allies in North America. This led to regular public trials of these ‘atomic spies’ in both the West and the USSR and spurred an East-West ‘war of words’. The intelligence war of the ‘First Cold War’ was the main impetus for modern intelligence agencies, with many of the organisations founded in the 1940s-1950s still operating today. The use of these technological and signals-based intelligence operations therefore became a key component of the arguments for the use or non-use of atomic weapons throughout the Cold War.
Submarines were one of the main espionage instruments of the Cold War, both before and after the 1970s. The US developed nuclear submarines in 1951, and the British joined them in 1960. These submarines allowed Western maritime strategy to expand: nuclear reactors rarely needed refuelling, and the boats could remain below periscope depth for far longer periods. This contrasted with previous submarines, which were primarily diesel-electric and routinely had to come to periscope depth to take on air every 36 hours. The emergence of the new engineering techniques of nuclear submarines suggests that nuclear development – and weaponry – had a major impact on the strategies of the US, the British, and even the Soviet Union. Submarines were not only a tool for gaining maritime communications intelligence but also a mode of transport for weapons (both nuclear and otherwise) during the ‘First Cold War’. Although Western nations – particularly the British and the US – were preoccupied with the perceived threat of nuclear warfare from the air-dropped atomic bomb, the development of submarines allowed blue-water navies to field a more balanced fleet of surface and underwater warfare. The enhancement of intelligence technology, particularly submarines, clearly derived from the fear of nuclear war, with the growth of espionage a major effect of nuclear technology on the conduct of the Cold War.
The US policy of nuclear deterrence suggested that states had an underlying rationality and credibility which would prevent them from turning to nuclear war. Intelligence and espionage were major military strategies during the Cold War, as the ‘intelligence war’ provided an opportunity for Western nations to enhance their military equipment and weaponry, most notably the submarines of the US Navy and the Royal Navy.
The History of Economics: Products of their time
As a subject, economics is generally focused on producing predictions and assessments of future outcomes. This is highly important; without such forecasts, the economic growth that established our high standard of living would have been very difficult to maintain, let alone initiate. However, the history of these economic ideas – crucially, the circumstances of their creation – is seldom studied widely. This can be a profound disadvantage when it comes to assessing the validity of models and applying economic theories. Such economic ideas are always the product of their time and place. Applying them blindly and without forethought to a situation can produce dire results.
Illustrating the history of economic ideas is perhaps the best evidence one can give. While many consider Adam Smith to be the father of economics, speculation on economic matters long preceded him. Greek and later medieval philosophers, while generally uninterested in economics, did make passing references to what would today be considered key tenets of economic theory. The Greek writer Xenophon noted the benefits of free trade between Greek city states (including peace and prosperity), while Nicole Oresme of Normandy was arguably an early monetarist (a school of economic thought that focuses on the role of money in an economy). Even the mercantilist era, which Smith spent most of his magnum opus (The Wealth of Nations) relentlessly refuting, introduced interest on loans, stock markets and, in several Dutch writings of the time, rudimentary demand and supply curves (which make up the fundamentals of microeconomics, as every first year CBE student will know). It was a time not so economically foreign to our own: there were stock market crashes (several major downturns occurred in London and Paris in the early 1700s), the accumulation of wealth by a new ‘merchant’ class, and the rise of significant monopolies such as the East India Companies of the Western colonial powers.
This conceptual lineage heavily shaped Smith’s thinking and beliefs. In the late 18th century, the ideas of mercantilism were persisting into an alien industrial world. The emphasis was now on the manufacturer of goods rather than on those who merely provided the raw material, as had been the case since time immemorial. The relics of aristocratic privilege and merchant protection had created inefficient monopolies and obsolescence that stifled growth (a practice only partly curbed by James I of England’s Statute of Monopolies in 1623). Most egregious of all to Smith was the idea that nations should always maintain a positive balance of trade, exporting more than they import, as a sign of economic prosperity. By demolishing these decaying policies, Smith firmly established in their place opposing concepts, including free market competition, free trade and fundamental theories of price and wage determination that set the foundation of modern economics.
The true ramifications of the industrial revolution, which Smith never saw, meant that the large, polluted factory with a huge workforce displaced the traditional agricultural sector as the bedrock of the economy. Goods were now being produced on a gargantuan scale, and prices, wages and labour were new variables for policy makers to consider. Consequently, disciples of Smith updated his ideas. Jean-Baptiste Say, a French industrialist, argued that supply creates its own demand, implying that goods brought to market will always find buyers (known as Say’s Law). The stockbroker David Ricardo argued, amongst other things, that Smith’s competitive market model was a self-correcting mechanism and that the rational individual pre-empts any government intervention (the Ricardian equivalence). By the end of the 19th century, these original economic concepts still permeated economics, forming the foundations of a highly theoretical discipline largely removed from the workings of the everyday economy. Assumptions of the rational consumer and producer formed the basis of models used to predict market outcomes. However, the blind application of these assumptions from another time and place collided violently with the most serious economic crisis of the era: the Great Depression.
Given their classical economics training, policy makers initially advocated no intervention in the economy. Following indirectly from the ideas of Ricardo and other ‘neo-classical’ economists, it was believed that the market regulated itself and that this ‘panic’ would soon be over. However, inaction simply resulted in mass unemployment, with consumer, producer and bank panic arresting economic activity. By 1937, with the economy seemingly heading towards another recession, policy makers firmly turned to new ideas. It was in this atmosphere that the ideas of John Maynard Keynes took centre stage. Showing that, unlike during the industrial revolution, it was possible to produce too much and face a shortage of demand, Keynes argued that the government’s role was to increase spending and lower taxes to make up for this demand shortfall. Otherwise, the economy was poised to stay in this nightmarish crisis, which could easily become a new equilibrium point (contrary to Ricardo). It was only with this crisis, and the acceptance of the failure of the prevailing paradigm, that economics began to grow to include Keynesian economics (the birth of macroeconomics), which could tackle the depression in a better manner.
This history lesson is still highly relevant today. Economics, like all disciplines, is liable to remain stuck in the prevailing thinking long after those ideas have expired. In 2003, Robert Lucas (1995 Nobel Laureate in Economics) claimed that macroeconomics had been perfected and that recessions were now a thing of the past. As it turned out, his classical approach to macroeconomics failed to predict the onset of the Global Financial Crisis (GFC) in 2008 or to provide a remedy. One must always maintain an open mind within economics and look deeper into the ideas being used – namely their origin, history and evolution – to assess their predictive power in the modern world.
Aether: Substance of the Cosmos and Breath of the Gods
“It’s an energy field created by all living things. It surrounds us and penetrates us; it binds the galaxy together.” ~ Ben Kenobi
Between the luminous blue earth and the bright shining moon lies the Sublunary Sphere – a realm of earth, fire, air and water. Beyond it stretches the region of aether, seemingly unbounded by the laws of earthly physics and characterised by an expanse of planets and stars.
What is the Aether?
The Origin of the Word
In fiction, there are often depictions of magic that utilise four elements: earth, fire, wind and water. They are collectively known as the ‘Greek classical elements’, and were once thought of as a kind of energy that embodied all living things. However, there is also a supposed fifth element belonging to this group – the ‘aether’ or ‘void’.
The word aether comes from the Homeric Greek, meaning ‘pure, fresh air’ or ‘clear sky’. In Greek mythology, the god ‘Aether’ also embodies the pure upper air that the gods breathe.
Its Importance in Science History
Aether was a component of a bygone, Earth-centred model of the cosmos.
The concept arose because of light’s wave properties. In nature, most waves exist in a medium, like waves in water or sound through air. At the time it was assumed that light waves needed a material to move through. Hence, a medium for such waves – the luminiferous aether – was born.
The End of Aether
Between 1887 and 1904, a series of experiments discredited the centuries-old concept of the aether.
In 1887, Albert Michelson and Edward Morley performed the most notable of these experiments, attempting to measure ‘aether drag’. When we move through the air, we produce drag as the air in front of us is slowed down. If the earth were travelling through aether, it should produce an analogous aether drag that slows light down. Thus, by measuring the supposed change in the speed of light, it was hypothesised that aether could be detected. In the Michelson and Morley experiment, no change was observed – a famous null result, and the strongest evidence yet against the existence of aether.
The consensus on the aether’s existence shifted further when Einstein published his Theory of Special Relativity in 1905, which explained the propagation of light without any aether and instead postulated that the speed of light is the same for all observers.
Gazing upon the history of the cosmos from a 21st-century perspective, it is interesting to see how our scientific understanding has evolved. It begs the question: what will the history of science look like a hundred years from now, and what changes in thought can we expect over the next few hundred years? How ludicrous will our current scientific truths seem in 2118?!
A Brief Timeline of the Aether
400 BC: The term aether was first used by Aristotle in the 4th century BC, who described it as an element lighter than air that surrounds the celestial bodies. In the hierarchy of elements, aether is lighter than air, air is lighter than water and water is lighter than earth. In Aristotle’s view, each element returns to its proper place when displaced, thus explaining why air rises while earth and water fall, and why the ‘heavens’ appear to remain in place.
1704: Opticks, published by Isaac Newton in 1704, explained the properties and behaviours of light. Previously, white light was thought of as colourless. However, in Opticks, Newton showed that white light is an accumulation of all colours and can be split into its component colours through prisms. He introduced an “Aethereal Medium” to try to explain diffraction – the behaviour of light when bending around corners or passing through slits.
1856: By 1856, aether had gained a magical list of requirements for its existence. It had to be a fluid to fill space, millions of times more rigid than steel to support high frequencies of light waves, massless and without viscosity so it didn’t visibly affect the orbits of planets, completely transparent and incompressible. Scientists were aware of the problems associated with aether, but the concept had become so entrenched in physical law, that scientists were not prepared to discredit it.
1887: The Michelson–Morley experiment was conducted, which pointed against the existence of aether.
1905: Albert Einstein published his Theory of Special Relativity, which dispensed with the aether entirely.
1925: Special Relativity had been widely accepted, and the debate over the aether’s validity slowed. Up until 2003, further tests were performed, with the clear majority disproving the existence of the aether.
As you may have heard over the past few weeks, coordinated tripartite strikes by the US, France and Britain have taken place on the outskirts of Damascus. In this tired seven-year war, it is often quite easy to lose track of who is fighting whom and who supports whom in the quagmire of rhetoric, accusations and generalisations. In this piece, I will be examining the war from a sectarian point of view, so buckle in for some rhetoric, accusations and generalisations.
It’s important to first know who is in the fight and why they are fighting. Put simply, it is the current regime vying to defend its existence against a plethora of rebel groups, some of whom dislike each other even more than they dislike the regime. The current regime is the Syrian branch of the Arab Socialist Ba’ath Party, which claims to follow the tradition of Ba’athism (from the Arabic for ‘renaissance’); however, it is often argued that the regime is a corruption of that ideology, and many have begun to label it neo-Ba’athist. Simply put, the main tenets of political Ba’athism are Arab unification, Arab nationalism, social and economic progressivism, one-party rule and secularism. It is the latter two tenets that have fuelled the fire of rebellion and led to the current conflict we see in the Middle East.
It is the forced secularisation and the unwillingness to accept political pluralism established by the current regime that rebel groups use as their justification. They accuse their secular-minded government of being the enemy of Allah and of punishing devout citizens. Those who are seen as too religiously minded are put on watchlists, sheikhs and imams are under strict government surveillance, and wearing the hijab is frowned upon in the public sector. Islamist rebellions have been happening in Syria since the 1970s, when Bashar’s father, Hafez al-Assad, implemented a constitution which omitted the requirement that the president of Syria must be a Muslim. Many saw this as a response to the fact that the Assad family are of Alawite origins, a little-known sect that many mainstream Sunnis do not consider to be Muslim.
Ultimately, what began as hopeful protests that reached every segment of society, inspiring hope for a more pluralistic future, has degenerated into a matter of pure sectarianism. Here’s a basic breakdown of the many parties involved:
Sunni Muslims: Generally support the rebellion. They view Assad as having oppressed Sunni Islamist movements, and see his secularisation of Syria as forced and as specifically targeting them. Viewing his rule as disenfranchising, the majority Sunni population generally favours democratic reforms, as Sunnis make up the vast majority of the population and thus stand to benefit from them the most. There is great diversity in the groups they support, with the most radical supporting ISIS, Islamists supporting the various Islamic rebel groups such as Jaysh al-Islam (Arabic for ‘the Army of Islam’), and non-sectarian moderates supporting what is left of the Free Syrian Army.
Christians: Have been an oppressed minority in the region for centuries, a situation recently exacerbated by the emergence of ISIS and other jihadist organisations. They have often had to rely on rulers for protection, and support Assad for his perceived role as a bulwark against political Islam. Their position can be viewed as begrudging support for the government and its secular protections.
Alawites and Shiites: Despite not being considered a denomination of Islam by mainstream Sunnis, the Alawites – the sect from which the ruling Assad family hails – have developed a strong affiliation with the Shia power of Iran and its Shia proxy in nearby Lebanon. Like the Christians, they fear retaliation by the Sunni majority if Assad loses power. They are traditionally the bulk of Assad’s support base and see Iran and Russia as their allies.
The Kurds: A distinct ethno-linguistic group that has traditionally been the victim of campaigns of forced Arabisation and cultural genocide. They seek to establish cultural autonomy at the very least, are anti-Assad, and do not get along with many of the other rebel groups, as their rebellion is mostly ethnically based rather than religiously or politically motivated.
It is important to remember that this is by no means a comprehensive guide to the war. No article of this word count could accomplish that. It is also important to note that many other actors and stakeholders have been left out, such as the Druze and Assyrian minorities, and that mentions of external actors have been kept to a minimum. However, hopefully this informs you just enough to begin forming your own views on the war.
In 1968, university campuses worldwide were aflame with political activity. Students challenged university administrations, campaigned for civil rights, and fought against racism and inequality in their own countries. It was the year students fought against colonialism and the imperial barbarity in Vietnam. Undeniably, 1968 was the year of the student.
In France, among other countries, student movements linked up with radicalising workers, shaking the seemingly stable foundation of capitalist rule. Whether openly authoritarian or nominally democratic, governments were threatened by this movement and responded with brutal and sometimes even deadly force.
Such experiences led millions of young people across the world to identify as revolutionaries. Fifty years on from this momentous year, student radicals should look to and celebrate 1968.
Across the world, student numbers were growing, and funding didn’t keep pace. Students were being crammed together in unprecedented numbers, which conflicted with their expectations of university life. Then, like today, university administrators, backed by some right-wing student drips, demanded that campuses be free from politics. Universities were increasingly expected to work alongside business – familiar today – creating a contradiction for students who expected education to be about broadening minds. These contradictions led to rupture.
As all this was brewing, the horrors of the Vietnam war were becoming harder for the ruling class to obscure. Students in Australia helped friends to evade the draft, even hiding them in student union offices. Japanese students blocked the streets to stop visiting American dignitaries, and American students regularly protested at draft and recruitment centres. Importantly, students linked both their own frustrated ambitions and the war with the for-profit education model: an education run according to the profit motive and tied to business would always deny students fair rights and conditions, and would bind itself to the capitalist war machine, with administrators offering their campuses as recruitment and R&D grounds for weapons manufacturers. Again, this is a familiar reality today, with ANU boasting a plethora of military and fossil fuel industry ties.
As student protests turned into anti-capitalist politics, critiques of war, and demands to remove profits and business from campus, universities and states cracked down with increasing brutality. The violence pushed many students to the conclusion that they were revolutionaries, and that as students alone their power was not enough. The student rebellion in France rocked the world: students and workers united, and after days of student battles with police, aviation workers walked out. Within five days, nine million workers were on strike. Sections of students were won to the politics of solidarity, centred on working-class struggle.
The year 1968 is an inspiration to all students who search for a way to build a better world, and it offers key lessons. All the issues students faced were fundamentally linked with capitalist profit-making and competition. The other great issues of 1968 – the Vietnam war, the denial of civil rights, the repression of workers, sexism and homophobia – were also fundamentally linked with capitalism. All the oppressed and exploited people of the world had, and still have, a common interest in ending the system that produced these horrors. Their liberation was, and still is, bound together, and only anti-capitalist politics, based on the politics and power of the working class, could achieve liberation for all.
“Be realistic. Demand the impossible”, the French youth insisted, encapsulating the spirit of the 1968 student rebels. This slogan should continue to inspire us today: to demand not just the “realistic”, but to set our sights on the destruction of capitalism, and to let these politics inform our struggles for a free education system today.