Understanding the effects of a new regulation is a significant challenge that drafters face when turning ideas into effective law. Several kinds of effects may be considered: the legal impact, namely the modification of the legal background that takes place whenever a new norm is introduced; the economic effects, in particular the costs that the law imposes on bodies including the state, companies and other organisations; and the social effects, such as changes to the job status of citizens or the citizenship status of workers and other subjects. Assessing these aspects in advance is a goal of many drafters. In this seminar we illustrate the research activities of the KREARTI research group in Verona, where we aim at delivering a prototype of a system that assists the drafter de iure condendo in developing a new law, so that she can evaluate the future effects of issuing that law before it comes into force. This goal can be achieved by means of a simulator that observes an artificial society, a digital twin of the actual society in which the law takes effect, and measures the consequences of issuing a new law, in order to assist the drafter in designing the norm itself. The system is described, and the research plan that shall bring us to the prototype is illustrated and discussed.
The purpose of this talk is to introduce generic attacks based on functional graphs. Over the past ten years, the statistical properties of random functions have been a particularly fruitful tool for mounting generic attacks. Initially, these attacks targeted iterated hash constructions and their combiners, developing a wide array of methods based on internal collisions and on the average behavior of iterated random functions. More recently, we (Gilbert et al., EUROCRYPT 2023) introduced a forgery attack on so-called duplex-based Authenticated Encryption modes which is based on exceptional random functions, i.e., functions whose graph admits a large component with an exceptionally small cycle. We have since improved this attack (Bonnetain et al., CRYPTO 2024) using so-called nested exceptional functions. We have also improved several attacks against hash combiners using exceptional random functions. This talk will present a variety of generic attacks based on functional graphs against hash functions, hash-based MACs and AEAD modes.
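As a rough illustration of the underlying structure (a toy sketch in Python, not any of the attacks above): iterating a function on a finite set always traces a "rho"-shaped path, a tail leading into a cycle, and for a random function the expected tail and cycle lengths are both on the order of the square root of the domain size. The snippet below, using a hypothetical toy domain, measures this structure empirically.

```python
import random

def rho_structure(f, x0):
    """Iterate f from x0 until a value repeats; return (tail_length, cycle_length).

    Every trajectory of a function on a finite set eventually enters a cycle,
    giving the "rho" shape that functional-graph attacks exploit.
    """
    seen = {}                      # value -> step at which it first appeared
    x, step = x0, 0
    while x not in seen:
        seen[x] = step
        x = f(x)
        step += 1
    tail = seen[x]                 # steps taken before entering the cycle
    cycle = step - seen[x]         # length of the cycle itself
    return tail, cycle

n = 2**16                          # toy domain size; real state spaces are far larger
table = [random.randrange(n) for _ in range(n)]   # a random function on {0, ..., n-1}
print(rho_structure(lambda x: table[x], random.randrange(n)))
# For a random function, tail and cycle are each about sqrt(pi * n / 8) on average.
```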
PhD student at Université Versailles Saint-Quentin-en-Yvelines working under the supervision of Christina Boura, Henri Gilbert and Yann Rotella.
The talk introduces the research we do in the CHAI Lab (Human-centered AI Lab) in the School of Informatics, University of Edinburgh. I will introduce three ongoing projects, with the main focus on the first one:
1. "A Collaborative Human-AI Approach to Mitigate Large-Scale Phishing Attacks", where we aim to develop AI-based tools to mitigate phishing attacks;
2. "Uncovering implicit inferences for improved relational argument mining", where we aim to analyse unstructured text to discover meaningful arguments and the connections between them;
3. "Enabling Answerability in Sociotechnical Systems", where we develop a mediator agent tool to facilitate dialogues between organizations and users.
Nadin is a Lecturer in Artificial Intelligence in the School of Informatics at the University of Edinburgh, and a Senior Research Affiliate at the Centre for Technomoral Futures, Edinburgh Futures Institute. Her research interests include human-centered AI, privacy, argument mining, responsible AI and AI ethics. She received her PhD from Bogazici University in 2017, and held a postdoc position at King's College London prior to joining the University of Edinburgh. Nadin regularly serves on the program committees of leading AI conferences such as AAMAS, IJCAI, AAAI and ECAI. In 2021, she was also a guest editor for Sociotechnical Perspectives of AI Ethics and Accountability in IEEE Internet Computing.
The Internet of Things (IoT) has become an integral part of our daily lives, revolutionising various industries with its innovative applications. IoT devices, such as smart appliances, security systems, and home hubs, have made our lives more convenient and efficient. They are used in sectors including healthcare, agriculture, and defence, providing benefits like enhanced efficiency, cost savings, and data-driven decision-making.
In the context of business integration, IoT cyber security is critical to the success and security of the integration. Cyber security concerns in any integration can be complex, particularly due to vulnerabilities and breaches that may arise from incompatible systems and policies. The associated risks span due diligence, compliance, the management of IoT systems, and finance- and asset-related exposure.
Several high-profile integration transactions have suffered due to cyber security issues. For instance, Experian's acquisition of Court Ventures and Verizon's purchase of Yahoo! were impacted by cyber security issues that surfaced after the transactions were announced. Furthermore, several IoT providers have been through critical cyber security breaches, which can significantly affect any business.
As an Assistant Manager at Evelyn Partners, I stand at the vanguard of our cyber security initiatives, offering expert consulting services that ensure the integrity of our clients' digital assets. My leadership in projects spanning IoT security, security analysis, and risk management has been pivotal in fortifying our cyber defences. Concurrently, I am honing my expertise through a PhD in advanced cyber security and AI at the University of Southampton. This scholarly pursuit not only ignites my passion for the field but also enriches the strategies I implement at Evelyn Partners. I am privileged to hold esteemed certifications such as SC-300, SC-100, SC-200, and CEH, underscoring my dedication to professional excellence. As a published author and certified trainer, I am fervently committed to disseminating knowledge and nurturing a culture of cyber security awareness. I eagerly anticipate engaging in a dialogue about the multifaceted nature of cyber threats and sharing actionable insights on how we can continue to excel in our cyber security endeavours.
Oh, and did I mention my diverse cultural heritage? It's an integral part of who I am, infusing a touch of global perspective and a dash of charm into my work. Join me as we explore the cutting-edge strategies that will keep us at the forefront of cyber security.
End-to-end encrypted secure messaging is a widely used class of cryptographic protocols enabling clients to communicate securely and asynchronously over untrusted network and server infrastructure. The term “secure” encompasses several security guarantees, including message authenticity and a robust level of message confidentiality, captured by the notions of post-compromise security (PCS) and forward secrecy (FS). Intuitively, these notions require that current messages remain secure against any adversary that controls all network traffic and can leak all participants’ local states both in the past (PCS) and in the future (FS).
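For intuition only (a minimal sketch, not the protocol analysed in this work): a one-way symmetric ratchet already illustrates forward secrecy, since hashing the chain key forward and deleting the old value means that a leaked current state reveals nothing about earlier message keys; post-compromise security additionally requires injecting fresh randomness, e.g. via a Diffie-Hellman ratchet.

```python
import hashlib

def ratchet(chain_key: bytes):
    """Derive (message_key, next_chain_key) from the current chain key.

    Because SHA-256 is one-way, an adversary who leaks the current chain key
    cannot walk the chain backwards to recover past message keys: a toy
    illustration of forward secrecy.
    """
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

ck = b"\x00" * 32                  # initial shared secret (toy value)
for i in range(3):
    mk, ck = ratchet(ck)           # the old chain key is overwritten and deleted
    print(f"message {i} key: {mk.hex()[:16]}...")
```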
A new class of messaging applications is based on underlying Continuous Group Key Agreement (CGKA) protocols, including the IETF's upcoming Messaging Layer Security (MLS) standard. Most of the functionality, security and efficiency properties of these protocols are inherited directly from their underlying CGKAs, rendering CGKA a growing subject of cryptographic research in recent years. In this work we analyse the security of, and propose improvements to, the CGKA protocol proposed by the MLS standard.
Yiannis Tselekounis is a Lecturer at the Department of Information Security at Royal Holloway, University of London. His research focuses on applied and theoretical aspects of cryptography, including the security of cryptographic protocols, leakage/tamper-resilient cryptography, and blockchains. Before joining RHUL, Yiannis was a postdoctoral researcher at Carnegie Mellon University and Faculty Fellow at New York University. He obtained his PhD degree from the Department of Informatics of the University of Edinburgh.
Sampling from a lattice Gaussian distribution has emerged as a common theme in various areas such as coding and cryptography. The de facto sampling algorithm, Klein's algorithm, yields a distribution close to the lattice Gaussian only if the standard deviation is sufficiently large. This talk is concerned with a new method based on Markov chain Monte Carlo (MCMC) for lattice Gaussian sampling, which converges to the target lattice Gaussian distribution for any value of the standard deviation. A number of algorithms will be presented, such as Gibbs and Metropolis-Hastings. A problem of central importance is to determine the mixing time. It is proven that some of these Markov chains are geometrically ergodic, namely, the sampling algorithms converge to the stationary distribution exponentially fast. Finally, an application to trapdoor sampling based on NTRU is demonstrated, potentially outperforming the FALCON signature scheme.
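To make the MCMC idea concrete, here is a one-dimensional toy over the integer lattice Z (an illustrative sketch, not one of the algorithms analysed in the talk): a Metropolis-Hastings chain with a symmetric random-walk proposal targets the discrete Gaussian for any standard deviation.

```python
import math
import random

def mh_discrete_gaussian(sigma, steps, x0=0):
    """Metropolis-Hastings sampler for the discrete Gaussian on Z,
    with probability mass proportional to exp(-x^2 / (2 * sigma^2)).

    The symmetric +-1 proposal keeps the acceptance rule simple; unlike
    Klein's sampler, the chain converges to the target for any sigma,
    at the cost of a mixing time that must be analysed.
    """
    x = x0
    for _ in range(steps):
        y = x + random.choice((-1, 1))                  # symmetric proposal
        log_ratio = (x * x - y * y) / (2 * sigma ** 2)  # log pi(y) - log pi(x)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = y                                       # accept the move
    return x

samples = [mh_discrete_gaussian(sigma=2.0, steps=500) for _ in range(1000)]
print("empirical mean:", sum(samples) / len(samples))   # should be close to 0
```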
Cong Ling is currently a Reader (equivalent to Professor/Associate Professor) in the Electrical and Electronic Engineering Department at Imperial College London. His research interest is focused on lattices and their applications to coding and cryptography.
In order to protect data in the cloud, a database should be stored in encrypted form and queries executed without prior decryption. Searchable encryption schemes are being deployed in real-world applications to achieve this objective. They balance security and performance by providing efficient algorithms that, however, leak some information about the data. This talk considers range queries on encrypted multidimensional data and explores the feasibility of reconstructing the plaintext data by exploiting the information leakage from such queries. We analyze common types of leakage, such as the access pattern, i.e., which individually encrypted records appear in each query response, and the volume pattern, i.e., the size of query responses that are encrypted as a whole. We also develop efficient searchable encryption schemes and assess both theoretically and experimentally their vulnerability to reconstruction attacks that exploit their leakage. By furthering the understanding of the security limitations of encrypted cloud data, our work enables developers to make more informed choices when deploying searchable encryption solutions.
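As a minimal illustration of why even volume leakage alone can be damaging (a hypothetical one-dimensional toy, not one of the attacks from the talk): if the adversary observes the response sizes of prefix range queries, simple differencing recovers the exact histogram of the plaintext attribute.

```python
# Hypothetical setting: records carry a secret attribute in {1, ..., N}; the
# server sees only encrypted responses but still learns the NUMBER of records
# matching each range query (the volume pattern).
secret_data = [3, 1, 4, 1, 5, 2, 3, 3]   # plaintext values, never sent in clear
N = 5

def volume(a, b):
    """What the adversary observes: the size of the response to range [a, b]."""
    return sum(1 for v in secret_data if a <= v <= b)

# Observing volumes of the prefix ranges [1,1], [1,2], ..., [1,N] suffices:
prefix = [volume(1, b) for b in range(1, N + 1)]
histogram = {1: prefix[0]}
for b in range(2, N + 1):
    histogram[b] = prefix[b - 1] - prefix[b - 2]
print(histogram)   # {1: 2, 2: 1, 3: 3, 4: 1, 5: 1}: exact counts recovered
```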
Evangelia Anna (Lilika) Markatou is an assistant professor of cybersecurity at TU Delft. She received her PhD from Brown University, advised by Roberto Tamassia. She graduated with a Bachelor's degree in Electrical Engineering and Computer Science in 2016 from the Massachusetts Institute of Technology (MIT). In 2018, she received a Master of Engineering from MIT advised by Nancy Lynch. In her research, she aims to develop secure and private protocols that enable users to utilize cloud computing resources without sacrificing their data.
We study the concrete security for succinct interactive arguments realized from probabilistic proofs and vector commitment schemes in the standard model.
We establish the tightest bound on the security of Kilian's succinct interactive argument based on probabilistically checkable proofs (PCPs). Then we show tight bounds for succinct interactive arguments based on public-coin interactive oracle proofs (IOPs), for which no previous analysis is known. Finally, we conclude that this VC-based approach is secure when realized with any public-query IOP (a special type of private-coin IOP) that admits a random continuation sampler.
Based on https://eprint.iacr.org/2023/1737.pdf, joint work with Alessandro Chiesa, Marcel Dall’Agnol, and Nick Spooner.
Ziyi Guan is a third-year PhD student at EPFL, supervised by Alessandro Chiesa and Mika Göös. She is interested in theoretical computer science, in particular complexity theory and cryptography.
Online scams are taking an emotional and financial toll on people around the globe. Artificial intelligence (AI) is already being used to create targeted campaigns, and humans are not able to distinguish AI-generated from "human" content. Evidently, technical systems alone cannot prevent people from falling for online scams. We also need to update the human computer user. I will present three studies from my PhD that examined novel paradigms to improve people's ability to detect phishing e-mails, a quintessential type of online scam. The first study tested which psychological and demographic factors relate to people's likelihood of falling for phishing e-mails, using an experimental setting with behavioural tracking and a representative participant sample. The results informed the design of three e-mail security tools that scan e-mails in a usable fashion, which we evaluated in the second study. Third, I used the psychological concept of "self-projection" to design and test an adversarial phishing detection training. Indeed, engaging people with how phishing e-mails are created can improve their detection ability. I will end the talk with a reflection on the implications of our findings and future directions for research.
Sarah recently completed her PhD in Security & Crime Science at UCL with a full scholarship from the Dawes Centre for Future Crime. She has a background in psychology and neuroscience, and four years of experience in AI and data science consulting. These roles included developing machine learning models for credit card fraud detection and working on AI use cases for the Dutch MoD. She started programming websites in primary school, but a fascination with how the human mind works drew her to psychological research in the first place. With her work, she aims to bridge the gap between cognitive and computer science.
Indistinguishability obfuscation allows one to render a program unintelligible without altering its functionality. Because it captures the power of most known cryptographic primitives and enables new ones, obfuscation is often referred to as crypto-complete. In this work we investigate constructions of indistinguishability obfuscation whose security can be reduced to potentially hard problems over lattices. Compared to other candidates, a purely lattice-based obfuscator has the advantage of relying on a single source of hardness and being plausibly post-quantum, enabling many applications in quantum cryptography.
We propose a new construction of lattice-based obfuscation whose security relies on an instance-independent assumption over lattices called the Equivocal Learning with Errors (LWE) assumption, which is closely related to the recently introduced Evasive LWE assumption. Our main technical ingredient is a new statistical trapdoor algorithm for equivocating LWE secrets over lattices with exceptionally short vectors, which may be of independent interest.
Ivy K. Y. Woo has been a PhD student in cryptography at Aalto University, Finland, since 2022. Her research focuses on cryptographic constructions from lattices. She is currently working on advanced encryption such as attribute-based encryption. More generally, she is interested in constructing cryptographic objects from an algebraic perspective.
It can feel like every new consumer device comes with some kind of voice integration. While this is often a win for usability, freeing up our hands and eyes to do other tasks, there's also something inherently creepy/unsettling about devices that speak and listen to us.
In this talk I'll be covering a range of exploratory work on privacy and security issues with conversational devices, how these are intensified by the way that computer speech is processed in the brain, and how we might be able to navigate a path out of the mess we've gotten ourselves into.
William Seymour is a Lecturer in Cybersecurity and member of the Cyber Security Group in the Department of Informatics at King’s College London. Before coming to King’s as a postdoctoral researcher, he obtained a DPhil in Cybersecurity from the University of Oxford and an MEng in Computer Science from the University of Warwick.
William conducts interdisciplinary work at the intersection of security, privacy, HCI, ethics, and law using a combination of computational and social science research methods. His work explores people’s concerns about using AI systems, what values those systems should embody, and how they can better meet the needs of the people who use them. He has worked with a wide range of public sector and industry partners including Microsoft, BRE Group, and the Information Commissioner’s Office.
Polynomial commitment schemes are a powerful tool that enables one party to commit to a polynomial p of degree d, and prove that the committed function evaluates to a certain value z at a specified point u, i.e. p(u) = z, without revealing any additional information about the polynomial. Recently, polynomial commitments have been extensively used as a cryptographic building block to transform polynomial interactive oracle proofs (PIOPs) into efficient succinct arguments.
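For background on how such evaluation proofs typically work (a generic sketch over a toy prime field, not the lattice-based construction presented in this talk): p(u) = z holds exactly when (X - u) divides p(X) - z, so the prover can exhibit the quotient q(X) = (p(X) - z) / (X - u) and the verifier checks the identity p(X) - z = (X - u) q(X), for instance at a random point.

```python
P = 2**31 - 1                      # a small prime modulus (toy choice)

def eval_poly(coeffs, x):
    """Horner evaluation of p(X) = coeffs[0] + coeffs[1] X + ... modulo P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, u, z):
    """Synthetic division: return q with p(X) - z = (X - u) q(X) modulo P."""
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * u) % P
        q[i - 1] = carry
    assert (coeffs[0] + carry * u - z) % P == 0, "p(u) != z"
    return q

p = [5, 0, 2, 7]                   # p(X) = 5 + 2 X^2 + 7 X^3
u = 11
z = eval_poly(p, u)
q = quotient(p, u, z)
# Verifier-style spot check at a random point r: p(r) - z == (r - u) * q(r).
r = 123456
assert (eval_poly(p, r) - z) % P == ((r - u) * eval_poly(q, r)) % P
```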
In this talk, we present new constructions of lattice-based polynomial commitments that achieve succinct proof size and verification time in the degree d of the polynomial. Extractability of the schemes holds in the random oracle model under the standard Module-SIS assumption. Concretely, the most optimized version achieves proofs on the order of 600KB for d = 2^20, which is competitive with the hash-based FRI commitment.
Ngoc Khanh Nguyen is a lecturer at King's College London. His current research interests include (but are not limited to) efficient lattice-based constructions and efficient post-quantum zero-knowledge proofs.
Previously, Khanh was a postdoctoral researcher at EPFL, hosted by Prof. Alessandro Chiesa. He obtained his PhD degree at ETH Zurich and IBM Research Europe - Zurich, supervised by Dr Vadim Lyubashevsky and Prof. Dennis Hofheinz. Before that, he did his undergraduate and master studies at the University of Bristol, UK.
Generative AI is revolutionizing the art industry by training models on billions of copyrighted artworks without consent, compensation or credit for the original artists. AI's ability to copy artists' styles from their copyrighted work is disrupting existing artists' income and livelihood, and discouraging aspiring art students from pursuing their dreams. In this talk, I will present our work "Glaze", which protects human artists from this threat by exploiting fundamental weaknesses in generative models. I will share some of the ups and downs of implementing and deploying an adversarial ML tool to a global user base, and reflect on mistakes and lessons learned.
In-person seminar
Shawn Shan is a PhD candidate in Computer Science at the University of Chicago, advised by Ben Zhao and Heather Zheng. His research focuses on developing technical solutions to protect people from malicious uses of AI. His research has received a Best Paper Award and the Internet Defense Prize at USENIX, and has been covered by media outlets such as the New York Times, BBC, Scientific American, and MIT Tech Review.
Privacy is relevant for virtually any application handling data. Therefore, studying privacy is critical, especially considering the increasing digitalization of applications. In order to protect sensitive information, it is crucial to have strong guarantees that systems respect privacy. New digital applications need to be secured and protected against any misuse, such as surveillance, profiling, stalking, or coercion (e.g., a doctor should be able to make prescriptions without pressure from pharmaceutical companies).
One way to formally specify how systems and applications work is to model them as security protocols, which define how messages are exchanged between several parties, often relying on cryptographic operations. In this talk, I will introduce the notion of $(\alpha, \beta)$-privacy in security protocols and illustrate the problem with examples. I will present recent research on the automated verification of privacy and mention important challenges.
I am a PhD student under the supervision of Sebastian Mödersheim and Luca Viganò. I am working in the Software Systems Engineering section at DTU Compute, the department of Applied Mathematics and Computer Science of the Technical University of Denmark. I previously completed an MSc in Computer Science and Engineering there, with a focus on safety and security by design.
My research topic is the study of privacy using formal methods and logic, in particular automated verification techniques. The goal is to better understand the actual privacy guarantees of digital applications so that we can develop technology respecting people's rights to privacy.
The widespread occurrence of mobile malware still poses a significant security threat to billions of smartphone users. To counter this threat, several machine learning-based detection systems have been proposed within the last decade. These methods have achieved impressive detection results in many settings, without requiring the manual crafting of signatures. Unfortunately, recent research has demonstrated that these systems often suffer from significant performance drops over time if the underlying distribution changes, a phenomenon referred to as concept drift. So far, however, it is still an open question which main factors cause the drift in the data and, in turn, the drop in performance of current detection systems.
To address this question, we present a framework for the in-depth analysis of datasets affected by concept drift. The framework enables a better understanding of the root causes of concept drift, a fundamental stepping stone for building robust detection methods. To examine the effectiveness of our framework, we use it to analyze a commonly used dataset for Android malware detection as a first case study. Our analysis yields two key insights into the drift that affects several state-of-the-art methods. First, we find that most of the performance drop can be explained by the rise of two malware families in the dataset. Second, we can determine how the evolution of certain malware families and even goodware samples affects the classifier's performance. Our findings provide a novel perspective on previous evaluations conducted using this dataset and, at the same time, show the potential of the proposed framework to obtain a better understanding of concept drift in mobile malware and related settings.
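To illustrate the kind of time-aware evaluation that exposes such drift (a synthetic sketch with made-up data, not the pipeline of the paper): train on the earliest period only, score each later period separately, and watch the score decay as the distribution shifts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, d, periods = 6000, 20, 6
X = rng.normal(size=(n, d))
t = np.sort(rng.uniform(0, periods, size=n))        # pseudo-timestamps
noise = 0.4 * t[:, None] * rng.normal(size=(n, d))  # perturbation grows over time
w = rng.normal(size=d)
y = ((X + noise) @ w > 0).astype(int)               # labels drift away from X alone

train = t < 1.0                                     # train only on the first period
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
for p in range(1, periods):
    test = (t >= p) & (t < p + 1)
    print(f"period {p}: F1 = {f1_score(y[test], clf.predict(X[test])):.2f}")
```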
This seminar will be a dry-run of the talk to be given at the AISec workshop 2023, co-located with ACM CCS.
Theo Chow is a dedicated PhD candidate under the guidance of Professor Fabio Pierazzi. He is an active member of the Cyber Security Group within the Department of Informatics at King’s College London. Prior to embarking on his doctoral journey at King's, Theo completed his Master of Science (MSc) in Advanced Microelectronics and Computer Systems at the University of Bristol, following a Bachelor of Engineering (BEng) in Electronics Engineering at the University of Warwick.
Theo's research passion lies at the intersection of eXplainable AI (XAI), Cybersecurity, Concept Drift, and Machine Learning Model Robustness. His work addresses the growing concerns surrounding the reliability of Machine Learning models and delves into how XAI can offer solutions. He is dedicated to demystifying the 'black box' nature of these models, ultimately empowering practitioners to understand and trust these increasingly influential systems.
Over the past 20 years or so, the world has seen an explosion of data. While in the past, controlled experiments, surveys, or compilation of high-level statistics allowed us to gain insights into the problems we explored, the Web has brought about a host of new challenges for researchers hoping to gain an understanding of modern socio-technical behavior. First, even discovering appropriate data sources is not a straightforward task. Next, although the Web enables us to collect highly detailed digital information, there are issues of availability and ephemerality: simply put, researchers have no control over what data a third-party platform collects and exposes, and more specifically, no control over how long that data will remain available. Third, the massive scale and multiple data formats require creative analysis execution. Finally, modern socio-technical problems, while related to typical social problems, are fundamentally different and, in addition to posing a research challenge, can also disrupt researchers' personal lives.
In this talk, I will discuss how our work has overcome the above challenges. Using concrete examples from our research, I will delve into some of the unique datasets and analyses we have performed, focusing on emerging issues like hate speech, coordinated harassment campaigns, and deplatforming, as well as modeling the influence that Web communities have on the spread of disinformation, weaponized memes, etc. Finally, I will discuss how we can design proactive systems to anticipate and predict online abuse and, if time permits, how the "fringe" information ecosystem exposes researchers to attacks by the very actors they study.
Emiliano De Cristofaro is Professor of Security and Privacy Enhancing Technologies at University College London (UCL). He received a PhD in 2011 from the University of California, Irvine, advised by Gene Tsudik. Before joining UCL in 2013, Emiliano was Research Scientist at Xerox PARC. His research background includes privacy-oriented (applied) cryptography and systems security; currently, he focuses on privacy in machine learning and cybersafety. Emiliano has co-chaired the PETS Symposium and the security/privacy tracks at WWW and ACM CCS. With his co-authors, he received distinguished paper/honorable mention awards from ACM CCS, NDSS, ACM IMC, and ACM CSCW. Ostensibly, he only refers to himself in the third person when writing seminar bios.
This talk will offer an overview of two innovative methodologies designed to understand what can be done when malicious or potentially unwanted software runs in the cloud. In the absence of a binary to be analyzed, traditional static and dynamic analysis becomes useless. In particular, I will present our latest IMC’22 paper focusing on the Slack chatbot ecosystem.
Guillermo Suarez-Tangil is an Assistant Professor at IMDEA Networks Institute and a Ramón y Cajal Fellow. His research focuses on systems security and malware analysis and detection. In particular, his area of expertise lies in the study of smart malware, ranging from the detection of advanced obfuscated malware to the automated analysis of targeted malware. Guillermo also holds a position at King's College London (KCL) as an Assistant Professor, where he has been part of the Cybersecurity Group since 2018. Before joining KCL, he was a Senior Research Associate at University College London (UCL), where he explored the use of program analysis to study malware. He has also been actively involved in other research directions aimed at detecting and preventing mass-marketing fraud and addressing security and privacy issues on the social web.
In recent years, we have witnessed a surge in the growth of technically sophisticated Advanced Persistent Threat (APT) attacks and their impact on industry, governance, and democracy. APT attacks are characterized by long-running complex attack chains that utilize heterogeneous files and sophisticated tactics, techniques, and procedures (TTPs). One of the most critical questions in this context is identifying the threat group behind the attack, which is known as APT attribution. Group attribution is helpful for defenders as it helps them prioritize their response and remediation efforts. In this talk, we introduce ADAPT, a static machine learning-based approach to APT attribution, which automates and standardizes the attribution process across heterogeneous file types. We present the findings and insights obtained from applying ADAPT to a newly crafted APT dataset consisting of 5,989 real-world APT samples from approximately 162 threat groups, spanning from May 2006 to October 2021.
Aakanksha is a second-year doctoral student at TU Wien's Security and Privacy Research Unit. Before joining TU Wien, Aakanksha completed her Master's degree in Computer Science at the University of Utah, focusing on cybersecurity. Following that, she worked as a Security Software Engineer at Microsoft in Redmond, USA. While at Microsoft, Aakanksha often engaged in purple-team activities, reverse-engineering malware binaries and emulating external adversaries (APT groups) such as APT29 and Fin7 to improve security detection and response. The experience drew her to the research area of malware analysis and attribution of advanced adversary attacks.
With the growing processing power of computing systems and the increasing availability of massive datasets, machine learning algorithms have led to major breakthroughs in many different areas. Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance and render learning-based systems potentially unsuitable for security tasks and practical deployment. In the talk, we look at this problem with critical eyes. First, we identify common pitfalls in the design, implementation, and evaluation of learning-based security systems. We conduct a study of 30 papers from top-tier security conferences within the past ten years, confirming that these pitfalls are widespread in the current security literature. In an empirical analysis, we further demonstrate how individual pitfalls can lead to unrealistic performance and interpretations, obstructing the understanding of the security problem at hand. As a remedy, we propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible. Furthermore, we identify open problems when applying machine learning in security and provide directions for further research.
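One recurring pitfall of this kind, the base-rate fallacy, can be shown in a few lines of arithmetic (illustrative numbers, not results from the study): a detector that looks excellent on a balanced test set can still drown analysts in false alarms once deployed against realistic class ratios.

```python
# A detector with a 99% true-positive rate and a 1% false-positive rate looks
# excellent on balanced data, but if only 1 in 1,000 samples is malicious,
# most alarms are false.
tpr, fpr, prevalence = 0.99, 0.01, 0.001
precision = (tpr * prevalence) / (tpr * prevalence + fpr * (1 - prevalence))
print(f"precision in deployment: {precision:.1%}")  # about 9%, i.e. roughly 10 false alarms per true hit
```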
We report several practically-exploitable cryptographic vulnerabilities in the Matrix standard for federated real-time communication and its flagship client and prototype implementation, Element. Together, these invalidate the confidentiality and authentication guarantees claimed by Matrix against a malicious server. This is despite Matrix's cryptographic routines being constructed from well-known and studied cryptographic building blocks. On the one hand, one of our attacks proceeds by chaining three attacks to achieve a full authentication and confidentiality break. On the other hand, the vulnerabilities we exploit differ in their nature (insecure by design, protocol confusion, lack of domain separation, implementation bugs) and are distributed broadly across the different subprotocols and libraries that make up the cryptographic core of Matrix. Together, these vulnerabilities highlight the need for a systematic and formal analysis of the cryptography in the Matrix standard.
Martin works across the field of cryptography and recently joined King’s College London as a professor.
Machine learning is increasingly used in security-critical applications, such as malware detection, face recognition, and autonomous driving. But can we trust machine learning? Unfortunately, the answer is no. Learning methods are vulnerable to different types of attacks that thwart their secure application. However, most research has focused on attacks in the feature space of machine learning.
In my talk, we will learn that we should look beyond the feature space when reasoning about the security of machine learning. First, the problem space, with real-world objects such as PDF files or malicious code, should be considered: real attacks are possible but require specialized techniques. Second, the mapping from problem space to feature space can introduce a considerable vulnerability in learning-based systems. Using the example of image scaling, we will examine how an adversary can exactly control the input to a learning algorithm. Third, we will see that the feature space also has an inherent connection to the media space of digital watermarking.
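A toy sketch of the scaling vulnerability (using nearest-neighbour scaling and hypothetical sizes, simplified from the interpolation kernels of real libraries): since downscaling by a factor k consults only a sparse grid of pixels, overwriting just that grid lets the attacker choose the model's input while the full-resolution image looks essentially unchanged.

```python
import numpy as np

k = 8
big = np.full((256, 256), 200, dtype=np.uint8)   # benign-looking bright image
target = np.zeros((32, 32), dtype=np.uint8)      # what the attacker wants the model to see

attacked = big.copy()
attacked[::k, ::k] = target                      # modify only the pixels scaling will keep

downscaled = attacked[::k, ::k]                  # nearest-neighbour downscaling by k
print(np.array_equal(downscaled, target))        # True: the model sees the target image
print(f"{(attacked != big).mean():.1%} of pixels modified")  # about 1.6%
```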
Erwin Quiring is a postdoctoral researcher at the Ruhr University Bochum as part of Germany's Excellence Cluster CASA. His main research focus lies in the intersection between machine learning and security, with topics such as malware detection, deep fake detection, or adversarial learning.
"Antivirus is death" and probably every detection system that focuses on a single strategy for indicators of compromise. This famous quote that Brian Dye --Symantec's senior vice president-- stated in 2014 is the best representation of the current situation with malware detection and mitigation. Concealment strategies evolved significantly during the last years, not just like the classical ones based on polymorphic and metamorphic methodologies, which killed the signature-based detection that antiviruses use, but also the capabilities to fileless malware, i.e. malware only resident in volatile memory that makes every disk analysis senseless. This review provides a historical background of different concealment strategies introduced to protect malicious --and not necessarily malicious-- software from different detection or analysis techniques. It will cover binary, static and dynamic analysis, and also new strategies based on machine learning from both perspectives, the attackers and the defenders.
With the rapid growth of technology, the concept of identity had to evolve towards a new paradigm: digital identity. This requires the establishment of digital identity management protocols to handle all the related processes. The design of these protocols is a very sensitive process that should be supported by specific methodologies to help security designers reach the best trade-off between all the dimensions at stake. In this seminar, we will dive into identity management protocols by both providing some relevant examples and describing a security methodology that we have developed to evaluate the security and risk of these protocols during the design process.
Marco Pernpruner is a PhD student in Security, Risk and Vulnerability, jointly offered by the University of Genoa and Fondazione Bruno Kessler (Italy). He received the BSc degree in Information and Business Organisation Engineering from the University of Trento in 2016, and the MSc degree in Computer Science and Engineering from the University of Verona in 2019. He is currently visiting King’s College London under the supervision of Prof. Luca Viganò. His research focuses on digital identity, with a specialization in the design, security and risk assessment of multi-factor authentication and fully-remote enrollment procedures.
The Cosmic Ray Defence is introduced and its plausibility in various cybercrime scenarios is evaluated quantitatively.