Flying cars. Space colonies. Immortality. Humanity has always attempted to predict the future, yet our predictions are often grossly inaccurate. At a recent artificial intelligence conference held in China, Elon Musk claimed that 99.99% of the predictions human beings have made about the future throughout history have turned out to be false. Nonetheless, predicting the future is appealing because it allows us to prepare for it. However, it also brings with it dangers: undermining public confidence, compounding the harm of wrong predictions, and possibly even catalysing the very outcomes we wish to avoid.
Predicting the future is appealing because it allows us to better prepare for it, giving us partial control over the outcomes of imminent events, from the level of the individual to that of society. Weather forecasts allow commuters to better plan their travel. Market predictions allow economists to shape more appropriate economic policy to protect, secure and create employment opportunities. Advanced projection allows authorities to forecast the likely impact of natural disasters on the territories they govern, equipping them to coordinate early warnings, evacuation, and relief efforts ahead of time, thus saving countless lives. The cost of inadequate prediction, and of failing to act on predictions, can be seen in the 2010 Haiti earthquake, the strongest recorded in the region for some 200 years. Hampered by social and political instability, inaccurate prediction compounded by official inaction meant the disaster claimed more than 200,000 lives, injured 300,000, and left 1.5 million residents internally displaced in a humanitarian crisis from which the country is still struggling to recover today. Had the authorities acted early and swiftly, the aftermath of the disaster would likely not have been as severe. Predicting the future is thus highly appealing because it enables us to prepare for, and minimise the impact of, undesirable future circumstances.
However, the efficacy of such preparation depends largely on the accuracy of the predictions and the methods used to derive them. This matters because acting on a wrong projection, or acting prematurely, can be more detrimental than not acting at all. Not only are resources allocated inefficiently to prepare for a wrongly identified outcome, but a wrong projection also undermines public confidence in the system, compromising the efficiency of future action. In 2009, a novel influenza strain, H1N1, emerged in North America, prompting the World Health Organisation (WHO) to declare a Public Health Emergency of International Concern (PHEIC) when only three countries had been affected at the time of declaration. The declaration was widely criticised as unnecessarily fuelling public fear and anxiety, prompting the WHO to review the criteria for its highest level of alarm and arguably making it slower to act in the next global pandemic the world would witness: COVID-19, caused by the novel coronavirus SARS-CoV-2. This illustrates that even when proactive measures prevent more deaths than a purely reactive approach would have, the public easily misconstrues the resulting absence of death and disease as evidence of overreaction by the authorities, eroding trust in them and in the mainstream media, and hampering future action and cooperation. Thus, a wrong prediction, or premature action on a low-confidence one, can be more dangerous than not having predicted at all.
Even if predictions can be ascertained to be highly accurate at the time they are made, attempting to predict the future can be dangerous because it may lead decision-makers to assume those predictions are already true, and to act in ways that invalidate them. Whenever alternatives are weighed, decisions are made in light of projected outcomes; those very decisions can then reduce the accuracy of the projections on which they were based. In the 2016 United States presidential election, candidate and businessman Donald Trump was widely ridiculed for running for president with no prior experience in governance. Yet despite constant mockery on late-night television and forecasts confidently predicting a Democratic victory, Trump went on to become the Republican nominee and thereafter secure the presidency. The Brexit referendum likewise saw a narrow majority vote to Leave, even though polls before and after the referendum suggested sentiment leaned the other way. Even in Singapore, it has been speculated that this projection bias helped the opposition secure a record number of parliamentary seats in 2020. The pattern of voters assuming the predicted outcome is already secure, and therefore casting their ballots for the next-best alternative in the belief that others will deliver the desired victory anyway, is evidently common worldwide, demonstrating that attempting to predict the future is dangerous when it predisposes decision-makers to treat as true that which is not.
Furthermore, attempting to predict the future and warn against an undesirable one might conversely inspire its realisation. In countless works of speculative fiction, from Orwell to Asimov, writers have warned of surveillance states and unethical, weaponised autonomous robots. Today, these scenarios have been or are quickly being realised around the world, from the social credit system in China, to internationally coordinated surveillance by the Five Eyes, to rapid advances in deployable unmanned weapons systems by the Defense Advanced Research Projects Agency (DARPA) in the United States. In seeking to warn against such systems, works of fiction such as the television series Black Mirror may instead serve as blueprints for malefactors to implement them effectively, for they already identify and overcome the shortcomings of such systems while conditioning and desensitising the public to the possibility of their realisation. Thus, attempting to predict and warn against undesirable outcomes can be dangerous, for such warnings may inspire the very things they sought to prevent.
From overpowering nature through industrialisation, to ruling over other people through systems of government, humanity has always sought control. It is then no wonder that we also seek to control our destinies by unceasingly attempting to predict the future. The fruits of predicting correctly are immeasurable, for they can save countless lives, but the consequences of error are equally devastating. It is only by balancing proactive efforts to prepare for the future with reactive ones to adapt to unforeseen circumstances as they develop that the appeal of prediction can be realised and its dangers dampened.