BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Isaac Newton Institute Seminar Series
SUMMARY:Data Anonymisation and Quantifying Risk Competitio
 n - Hiroaki Kikuchi (Meiji University)
DTSTART;TZID=Europe/London:20161205T140000
DTEND;TZID=Europe/London:20161205T142000
UID:TALK69312AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/69312
DESCRIPTION:One of the main difficulties is to be abl
 e to design and formalize realistic adversary model
 s\, taking into account the background knowledge o
 f the adversary and its inference capabilities. In p
 articular\, many privacy models currently exist in t
 he literature\, such as k-anonymity and its extensio
 ns such as l-diversity\, as well as differential pri
 vacy. However\, these models are not necessarily com
 parable\, and what might appear to be the optimal an
 onymization method in one model is not necessarily t
 he best one for a different model. To be able to ass
 ess the privacy risks of publishing a particular ano
 nymized dataset\, it is necessary to evaluate the re
 -identification risk of data anonymized from a commo
 n dataset.\n\nThe main objective of the competition i
 s precisely to investigate the strengths and limits o
 f existing anonymization methods\, from both theoreti
 cal and practical perspectives. More precisely\, give
 n a common dataset containing personal data and a his
 tory of online retail payments\, participants in the c
 ompetition attempt to anonymize the given dataset in s
 uch a way that re-identification of its records is imp
 ossible without losing data utility. They are also enc
 ouraged to try to re-identify the datasets anonymized b
 y the other participants. With pre-defined utility func
 tions and re-identification algorithms\, the security a
 nd the utility of the anonymized dataset are automatica
 lly evaluated as the maximum re-identification probabil
 ity and the mean average error between the anonymized d
 ata and the original dataset\, respectively. Throughout t
 he competition\, we aim to gain an in-depth understandi
 ng of how to quantify the privacy level provided by a p
 articular anonymization method\, as well as the achieva
 ble trade-off between privacy and utility of the result
 ing data. The outcomes of the meeting will greatly bene
 fit the privacy community.
LOCATION:Seminar Room 1\, Newton Institute
CONTACT:INI IT
END:VEVENT
END:VCALENDAR