EPFL
School of Computer and Communication Sciences
Summer Research Institute 2018
18.06.18 - 19.06.18
Overview
The Summer Research Institute (SuRI) is an annual event that takes place at the School of Computer and Communication Sciences of the École polytechnique fédérale de Lausanne, Switzerland. The workshop brings together renowned researchers and experts from academia and industry. It features a number of research talks by speakers from all around the world and is conducive to informal discussions and social activities.
The event is open to everyone and attendance is free of charge.
If you plan to attend SuRI 2018, please register to help us organize the event.
Program (Detailed)
18.06.18
Vulnerability reward programs, a.k.a. bug bounties, are a near-universal component of major software security programs. Today, though, such programs have three major deficiencies. They fail to provide strong technical (or other) assurances of fair payment for reported bugs, lack rigorous principles for setting bounty amounts, and can only effectively incentivize economically rational hackers to disclose bugs by offering rich bounties. As a result, rather than reporting bugs, hackers often choose to sell or weaponize them.
We offer a novel, principled approach to administering and reasoning about bug bounties that cost-effectively boosts incentives for hackers to report bugs. Our key idea is a concept that we call an exploit gap. This is a transformation of program code that prevents a serious bug from being exploited as a security-critical vulnerability. We focus on a broadly applicable realization through a variant of the classic idea of N-version programming. We call the result a hydra program.
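To make the hydra idea concrete, below is a minimal, hypothetical Python sketch (the function names and the toy withdrawal logic are ours, not the speakers' implementation): several independently written "heads" process the same transaction, and any divergence between them aborts execution, triggering an escape hatch instead of committing a potentially exploited state.

# Minimal, hypothetical sketch of a hydra program (names and toy logic
# are ours, not the authors' implementation).

class Divergence(Exception):
    """Raised when the independently written heads disagree."""

def run_hydra(heads, balances, sender, amount):
    """Run the same withdrawal on every head; commit only if all agree."""
    results = [head(dict(balances), sender, amount) for head in heads]
    if any(r != results[0] for r in results[1:]):
        # A bug exploitable in one head but not in the others surfaces
        # here as a divergence: the contract can freeze and pay a bounty
        # instead of losing funds.
        raise Divergence("heads disagree: freeze contract, pay bounty")
    return results[0]

def head_a(balances, sender, amount):
    if balances.get(sender, 0) >= amount:
        balances[sender] = balances.get(sender, 0) - amount
    return balances

def head_b(balances, sender, amount):  # independently written variant
    if balances.get(sender, 0) - amount >= 0:
        balances[sender] = balances.get(sender, 0) - amount
    return balances

def head_buggy(balances, sender, amount):  # missing the balance check
    balances[sender] = balances.get(sender, 0) - amount
    return balances

state = {"alice": 10}
print(run_hydra([head_a, head_b], state, "alice", 3))  # {'alice': 7}
try:
    run_hydra([head_a, head_buggy], state, "alice", 99)
except Divergence as e:
    print(e)  # the overdraft is caught as a divergence, not a theft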
As our main target application, we explore smart contracts, programs that execute on blockchains. Because smart contracts are often financial instruments, they offer a springboard for our rigorous framework to reason about bounty price setting. By modeling an economically rational hacker’s bug-exploitation, we show how hydra contracts greatly amplify the power of bounties to financially incentivize disclosure. We also show how smart contracts can separately enforce fairness for bug bounties, guaranteeing payment for correctly reported bugs.
We present a survey of well-known exploits to date against Ethereum smart contracts, showing that multi-language hydra programming would have abated most of them. We also report on our implementation of hydra Ethereum contracts.
Bio: Ari Juels is a Professor at Cornell Tech (Jacobs Institute) in New York City, and Computer Science faculty member at Cornell University. He is a Co-Director of the Initiative for CryptoCurrencies and Contracts (IC3). He was formerly Chief Scientist of RSA (The Security Division of EMC). His recent areas of interest include blockchains, cryptocurrency, and smart contracts, as well as applied cryptography, cloud security, user authentication, and privacy. Visit www.arijuels.com for more info.
In this talk, I will present Securify [https://securify.ch], the first security analyzer for smart contracts that is scalable, fully automated, and able to prove contract behaviors safe or unsafe with respect to a given property. A key observation of this work is that it is often possible to devise precise patterns over the contract's data-flow graph such that a match of the pattern implies either a violation or satisfaction of the original security property. A key benefit of working with patterns, instead of with the corresponding properties, is that patterns are substantially more amenable to automated reasoning. Based on this insight, Securify uses a set of patterns that mirror the satisfaction/violation of relevant security properties. To check these patterns, Securify symbolically encodes the dependence graph of the contract in stratified Datalog and leverages off-the-shelf scalable Datalog solvers to analyze the code efficiently (typically within seconds).
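As a toy illustration of the pattern idea (our own simplified Python encoding, not Securify's actual Datalog rules), one can represent dependence-graph facts as relations and check a violation pattern with a direct query over them:

# Toy encoding of pattern checking over a contract's dependence graph,
# in the spirit of Securify (our simplification, not the tool's Datalog).

# Facts extracted from a hypothetical contract: which instructions
# perform external calls, which write to storage, and a may-follow relation.
facts = {
    "call": {3},            # instruction 3 performs an external call
    "sstore": {5},          # instruction 5 writes to storage
    "follows": {(5, 3)},    # instruction 5 may execute after instruction 3
}

def violates_no_write_after_call(facts):
    """Violation pattern: some storage write may follow an external call.

    A match implies a violation of the property 'no state change after
    an external call', which is related to reentrancy vulnerabilities."""
    return any((w, c) in facts["follows"]
               for w in facts["sstore"] for c in facts["call"])

print(violates_no_write_after_call(facts))  # True -> property violated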
A demo version of Securify is publicly available at https://securify.ch. Since its release, the public version has been used more than 22,000 times and is regularly employed by security experts to perform professional security audits of smart contracts.
Bio: Petar is a senior researcher at the Secure, Reliable, and Intelligent Systems Lab [https://www.sri.inf.ethz.ch/] at ETH Zurich and a Chief Scientist / Co-founder at ChainSecurity [https://chainsecurity.com]. His work centers around security and privacy (blockchain, networks, and system security) and combines techniques from the areas of programming languages, machine learning, and probabilistic programming. For more details, see http://www.ptsankov.com.
Computer hardware provides the ultimate building blocks that enable the evolution towards the digital society. Malicious manipulations at the IC level can compromise the security of an entire system, including safety-critical applications such as automotive electronics, medical devices, or SCADA systems. From an adversarial point of view, such attacks have the "advantage" that they tend to be almost impossible to detect. Moreover, the revelations of Edward Snowden have shown that hardware Trojans are a realistic tool in the arsenal of large-scale adversaries.
Even though hardware Trojans have been studied in the literature for a decade or so, little is known about what they might look like, especially those specifically designed to avoid detection. In this talk we introduce several low-level hardware attacks against embedded systems, targeting the two dominant types of hardware platforms, ASICs and FPGAs.
Bio: Christof Paar has the Chair for Embedded Security at Ruhr University Bochum and is affiliated professor at the University of Massachusetts Amherst. He co-founded CHES, the Conference on Cryptographic Hardware and Embedded Systems. His research interests include hardware security, efficient crypto implementations, physical-layer security and security analysis of real-world systems. He holds an ERC Advanced Grant in hardware security and is Fellow of the IEEE and the IACR. Christof has more than 200 publications in applied cryptography and is co-author of the textbook Understanding Cryptography (Springer). He co-founded Escrypt GmbH, a leading player in automotive security, which is now part of Bosch.
Support for range queries on numerical data is fundamental to database systems. This explains why several recent proposals for encrypted databases attempt to maintain this functionality "in the encrypted domain". I will present a generic analysis of the security of such systems in two distinct attack settings. In the first setting, the adversary learns the access patterns, that is, which sets of records are returned in response to different range queries; this corresponds to an honest-but-curious database server. In the second setting, the adversary learns only the volumes of the responses, that is, the numbers of records returned in response to range queries. This corresponds to a network adversary who sees only encrypted communications between a client making queries and the database server. In the first setting, I'll show that complete database reconstruction is possible in the "dense" case, given only O(N log N) uniformly random range queries, where N is the number of distinct values of the data. I'll also sketch a much more efficient approximate reconstruction attack and an attack that uses an auxiliary distribution for the data. In the second setting, I'll explain how to carry out count reconstruction attacks, in which the attacker recovers the exact number of records of each value in the database. This leakage can represent a significant privacy breach. Here, under the assumptions that the ranges are uniformly distributed and that the data is sufficiently dense, I will explain why O(N^2 log N) range queries suffice.
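As a rough illustration of why on the order of N log N queries suffice in the dense case (a toy simulation of our own, not the attack from the talk), full reconstruction requires in particular that every pair of adjacent values be split by some observed query endpoint, a coupon-collector-style condition:

# Toy simulation of the dense-data access-pattern setting (our own
# sketch). Reconstruction needs, in particular, every pair of adjacent
# values to be split by some query endpoint; we count how many uniform
# range queries that takes and compare against N log N.
import math
import random

def queries_to_separate_all(N, rng):
    """Issue random range queries [a, b] with 1 <= a <= b <= N until
    every boundary between values i and i+1 is crossed by an endpoint."""
    unseparated = set(range(1, N))  # boundary i | i+1, for i = 1..N-1
    count = 0
    while unseparated:
        a, b = sorted((rng.randint(1, N), rng.randint(1, N)))
        count += 1
        # Query [a, b] separates the boundaries (a-1 | a) and (b | b+1).
        unseparated.discard(a - 1)
        unseparated.discard(b)
    return count

rng = random.Random(0)
for N in (100, 1000):
    avg = sum(queries_to_separate_all(N, rng) for _ in range(20)) / 20
    print(N, round(avg), round(N * math.log(N)))  # grows like N log N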
The talk is based on joint work with Paul Grubbs, Marie-Sarah Lacharite and Brice Minaud.
19.06.18
For the last several years, Google has been leading the development and real-world deployment of state-of-the-art, practical techniques for learning statistics and ML models with strong privacy guarantees for the data involved. I'll introduce this work, and the RAPPOR and Prochlo mechanisms for learning statistics in the Chromium and Fuchsia open-source projects. Then I'll present a new "exposure" metric to estimate the privacy problems due to unintended memorization in machine learning models, and show how such memorization can allow extracting individual secrets, such as social security numbers. Finally, I'll give an overview of the practical techniques we've developed for training Deep Neural Networks with strong privacy guarantees, based on Differentially-Private Stochastic Gradient Descent and Private Aggregation of Teacher Ensembles.
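For readers unfamiliar with Differentially-Private Stochastic Gradient Descent, the following is a minimal sketch of the mechanism on a toy logistic-regression objective (our own illustrative code, not Google's implementation): per-example gradients are clipped in L2 norm, summed, and perturbed with Gaussian noise before the update.

# Minimal sketch of DP-SGD (our own illustrative code): clip each
# per-example gradient, sum, add Gaussian noise, take an averaged step.
import numpy as np

def dp_sgd_step(w, X, y, rng, lr=0.1, clip=1.0, noise_mult=1.1):
    grads = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-(xi @ w)))          # per-example prediction
        g = (p - yi) * xi                            # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)   # clip L2 norm to <= clip
        grads.append(g)
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        0.0, noise_mult * clip, size=w.shape)        # calibrated noise
    return w - lr * noisy_sum / len(X)

rng = np.random.default_rng(0)
X = np.array([[1.0, 0.2, -0.5], [0.3, -1.0, 0.8]])
y = np.array([1.0, 0.0])
print(dp_sgd_step(np.zeros(3), X, y, rng))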
Bio: Úlfar Erlingsson is a Senior Staff Research Scientist on the Google Brain team, currently working primarily on the privacy and security of deep learning systems. Previously, Úlfar led computer security research at Google, and was a researcher at Microsoft Research, Silicon Valley, and an Associate Professor at Reykjavik University. He was co-founder and CTO of the Internet security startup Green Border Technologies and Director of Privacy Protection at deCODE Genetics. Úlfar holds a PhD in computer science from Cornell University.
Advertising now funds most of the popular web sites and internet services: companies including Facebook, Twitter, and Google all provide their services for free, in exchange for collecting data from their users. One of the primary explanations for the success of these advertising platforms is the ability for advertisers to target user attributes or behaviors. Recently, many advertising services have introduced a new mechanism that enables significantly more fine-grained targeting, commonly called personally identifiable information (PII)-based targeting or custom audiences. These allow the advertiser to select the exact users who should see their ads by providing the platform with the users’ PII or other uniquely-identifying information (e.g., cookies or advertiser IDs). For example, on Facebook, advertisers literally upload a CSV file containing up to 15 different types of PII; Facebook then matches that file against their database and allows the advertiser to advertise to just those users who match.
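Schematically, PII-based matching of this kind works as sketched below (a simplification of our own; platforms typically have advertisers normalize and hash PII, e.g. with SHA-256, before upload, and then intersect the hashes with their own user records):

# Sketch of hashed-PII audience matching (our simplification of the
# custom-audience mechanism described above).
import hashlib

def normalize_and_hash(email):
    canonical = email.strip().lower()  # normalize before hashing
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Advertiser side: hash the uploaded contact list.
uploaded = {normalize_and_hash(e)
            for e in ["Alice@example.com ", "bob@example.com"]}

# Platform side: hash known users the same way and intersect.
platform_users = {"alice@example.com": "user-1",
                  "carol@example.com": "user-3"}
audience = {uid for email, uid in platform_users.items()
            if normalize_and_hash(email) in uploaded}
print(audience)  # {'user-1'}: only matching users see the ad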
This talk provides an overview of the work my group has undertaken to better understand the security and privacy implications of PII-based targeted advertising. We found a fundamental design flaw in Facebook's custom audience service that can be exploited to reveal both users' PII and targeting attributes; we are now actively engaging with Facebook to develop techniques that implement these services without leaking data. Additionally, we are exploring ways in which PII-based targeted advertising could be used by malicious advertisers to run discriminatory ads that may violate U.S. federal law, and have found numerous ways in which this could be done. Finally, we are exploring ways in which the PII-based targeting interface may give us visibility into the little-studied data broker ecosystem.
Despite the current push for stronger privacy regulations and increased privacy awareness among customers, many businesses are slower to adopt privacy-enhancing technologies (PETs) than one would expect. In some cases, organisations fear the consequences of violating privacy regulations but lack guidance on which PETs fit their data use case, which prevents them from using the data they hold. In other cases, organisations struggle to implement privacy technologies properly and share data that has not been sufficiently protected, leading to a breach of privacy. In this talk we explore some of the technical challenges organisations face when trying to adopt PETs.
We will use differential privacy as an example of an emerging privacy technology that could answer many organisations' need to safely share aggregate statistics, but is nonetheless used by only a few. We will look at some of the reasons for its slow adoption and touch on questions such as: Where are the gaps between the applications described in the academic literature and the requirements coming from industry? Which concepts are hardest to understand for a non-expert audience? Following our overview of the problems, we will look at a concrete industry use case where differential privacy was a fit, the challenges that arose in explaining it, and the practical challenges of implementing it. Finally, we will discuss how we can make it easier for organisations to embed privacy into their existing data architectures and work towards PETs that better meet the requirements of real-world data use cases.
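For the aggregate-statistics use case mentioned above, the canonical mechanism adds Laplace noise calibrated to the query's sensitivity; a minimal sketch follows (toy code of our own, not Privitar's implementation):

# Minimal sketch of a differentially private count (toy code of our
# own): Laplace noise with scale 1/epsilon hides any single record's
# presence, since a count query has sensitivity 1.
import numpy as np

def dp_count(records, predicate, epsilon, rng):
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

rng = np.random.default_rng(0)
ages = [23, 35, 41, 29, 62, 57]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng))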
Encrypted messaging provides strong confidentiality guarantees for user communications by ensuring that even service providers learn nothing about message contents. This end-to-end guarantee is at odds with service providers' goal of properly handling abuse, such as harassment of one user by another. Towards resolving this tension, Facebook deployed a technique called message franking, in which a sender must commit to message contents in a way that allows receivers to verifiably reveal message contents to the service provider in case of abuse.
In this talk I will describe our work providing the first in-depth treatment of message franking. We detail a new variant of symmetric encryption, called compactly committing authenticated encryption (ccAE), that captures the security and functionality previously implicit in message franking. We show that Facebook's basic approach realizes a secure ccAE, but it is unfortunately slow relative to standard encryption algorithms, and for larger messages such as file attachments, Facebook used AES-GCM instead. We demonstrate how to exploit this gap: a sender can transmit an abusive message that the receiver cannot report to Facebook. We disclosed this issue to Facebook, which patched its systems and awarded us a bug bounty.
We go on to propose the fastest ccAE scheme to date. It can encrypt and commit to a message using a single pass of a suitable cryptographic hash function, such as SHA-256. We also provide negative results ruling out faster approaches based on block ciphers.
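To illustrate the committing functionality at the heart of message franking (a toy sketch of our own; Facebook's deployed scheme and the ccAE constructions in the talk differ in detail), the sender can commit to the message under a one-time franking key that travels inside the end-to-end encrypted payload, while the commitment itself is visible to the provider:

# Toy sketch of the committing piece of message franking (our own
# simplification; the actual schemes differ in detail).
import hashlib
import hmac
import os

def frank(message):
    """Sender side: commit to the message under a one-time franking key.
    (k_f, message) travel inside the end-to-end encrypted payload; the
    commitment is visible to the provider and binds the sender to it."""
    k_f = os.urandom(32)
    commitment = hmac.new(k_f, message, hashlib.sha256).digest()
    return k_f, commitment

def report(message, k_f, commitment):
    """Provider side: verify a message the receiver reports as abusive."""
    expected = hmac.new(k_f, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

k_f, c = frank(b"abusive message")
assert report(b"abusive message", k_f, c)      # honest report verifies
assert not report(b"another message", k_f, c)  # forged report rejected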
This talk will cover joint work with Yevgeniy Dodis, Paul Grubbs, Jiahui Lu, and Joanne Woodage.
Speakers
Úlfar Erlingsson
(Google)
Harry Halpin
(Inria de Paris)
Nadia Heninger
(University of Pennsylvania)
Ari Juels
(Cornell Tech)
Alan Mislove
(Northeastern University)
Christof Paar
(Ruhr-Universität Bochum)
Kenny Paterson
(Royal Holloway)
Tom Ristenpart
(Cornell Tech)
Theresa Stadler
(Privitar)
Nick Sullivan
(Cloudflare)
Petar Tsankov
(ETHZ / ChainSecurity)
Directions
If you're flying in, Genève-Cointrin is the nearest international airport; from there, Lausanne is about 45 minutes away by train. Zürich Airport is about 2.5 hours away by train. For train schedules, please see the Swiss Rail website.
To reach the SuRI venue from the EPFL M1 metro station, please check the map below.