Technology and international security

Institution: ESPOL European School of Political and Social Sciences

Language: English

Period: S3


This course examines the relationship between technological advancements and international security. It explores how emerging technologies transform the nature of conflict, cooperation, and power dynamics in the global arena. In particular, the course focuses on space technologies and artificial intelligence, introducing key concepts such as the technological imperative, technological convergence, and duality. Because the course touches on technical aspects of these technologies, it is important that students complete the readings in advance.


Course Evaluation


Students’ performance in this course will be assessed according to the following criteria:



  • Attendance, contribution to in-class debates, and participation (20%).

  • Group presentation (40%).

  • Final exam (50%).


Aims of the Course


By the end of this course, students will be able to:


– Understand the definition and impact of emerging and disruptive technologies (EDTs) in global geopolitics, and develop a historical understanding of the relationship between technology and international security.


– Explore the theories underpinning security in outer space, and understand the functioning of other EDTs such as artificial intelligence.


– Navigate the main theories and debates surrounding the impact of EDTs on the battlefield.


– Master the fundamental terminology related to EDTs.


– Assess the political and security importance of emerging technologies.



Course Structure



Class 1 (25 September 2024) Introduction to Technology & International Security


Required readings:


Csernatoni, R., & Martins, B.O. (2023) Disruptive Technologies for Security & Defence: Temporality, Performativity and Imagination. Geopolitics, 29(3).


Buzan B., & Hansen, L. (2009) The Evolution of International Security Studies, Cambridge: Cambridge University Press. Chapter 3.


Optional reading:


Sagan, S.D., & Waltz, K.N. (2003). The Spread of Nuclear Weapons: A Debate Renewed. New York: W.W. Norton & Company. Chapter 1.



Class 2 (02 October 2024) Outer Space security: strategic and physical perspectives on space


Required readings:


Dolman, E. (1999). Geostrategy in the space age: An astropolitical analysis. Journal of Strategic Studies 22 (2-3), pp. 83-106.


Al-Rodhan, N.R.F. (2012). Meta-Geopolitics of Outer Space: An Analysis of Space Power, Security and Governance. Basingstoke: Palgrave Macmillan. Chapter 3: Space Technologies and Meta-Geopolitics, pp. 44-68.



Class 3 (09 October 2024) NewSpace and International Security


Required readings:


Paikowsky, D. (2017). What Is New Space? The Changing Ecosystem of Global Space Activity. New Space 5 (2), pp. 84-88.


Fukushima, Y. (2013). Debates over the Military Value of Outer Space in the Past, Present and the Future: Drawing on Space Power Theory in the U.S. NIDS Journal of Defense and Security. http://www.nids.mod.go.jp/english/publication/kiyo/pdf/2013/bulletin_e2013_4.pdf.


Optional reading:


Tellis, A. (2007). China’s Military Space Strategy. Survival 49 (3), pp. 41-72.



Class 4 (16 October 2024) Characterisation of artificial intelligence models: debunking myths from a scientific perspective


Required readings:


Buchanan, B.G. (2006). A (Very) Brief History of Artificial Intelligence. AI Magazine 26(4). http://www.aaai.org/ojs/index.php/aimagazine/article/view/1848/1746.


LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature 521. http://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf



Class 5 (23 October 2024) “Hacking AI”—safety in Artificial Intelligence models: human alignment and other socio-technical aspects.


Required readings:


Goodfellow, I. (2018). Making Machine Learning Robust Against Adversarial Inputs. Communications of the ACM 61(7). https://cacm.acm.org/magazines/2018/7/229030-making-machine-learning-robust-against-adversarial-inputs/pdf


Richards, N.M., & King, J.H. (2013-2014). Three Paradoxes of Big Data. Stanford Law Review Online, 66, pp. 41-46.


Optional readings:


Leike, J. (2018). Scalable agent alignment via reward modeling. https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84


Amodei, D., et al. (2016). Concrete Problems in AI Safety. https://arxiv.org/pdf/1606.06565.pdf



Class 6 (06 November 2024) Perspectives on security and artificial intelligence: global geopolitics and emerging doctrines


Geopolitics of