Postdoc Position at University of Cambridge

The University of Cambridge is delighted to offer a postdoctoral associate position in human–computer interaction. The successful candidate will receive funding to work and study in the UK for two years. Applications close on 31 January 2019.

In collaboration with the Machine Learning Group, the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge invites applications for a postdoctoral Research Associate to work on the project ‘Trust and Transparency in AI’, which spans multiple disciplines including machine learning, law, psychology and policy. Funding for this position is available for 2 years in the first instance. It is an exciting opportunity for a talented individual to make a major contribution to the development of this field.

CFI is a new, highly interdisciplinary research centre addressing the challenges and opportunities posed by AI. Funded by the Leverhulme Trust, CFI is based at the University of Cambridge, with partners in the University of Oxford, Imperial College, and UC Berkeley, and has close links with industry and policymakers.

This is a new post within CFI’s Trust and Transparency project. This project, led by Dr Adrian Weller and Professor Zoubin Ghahramani, and involving partners at Imperial College and DeepMind, examines technical, legal and social mechanisms for ensuring AI systems are appropriately transparent and trustworthy.

(i) Transparency: studies ways to make the reasons for an AI's predictions or decisions interpretable. An emerging field of research within machine learning addresses these issues, though the psychology of how humans understand systems is also important.

(ii) Trustworthiness: seeks to understand when humans tend to trust machines, and when they should, that is, what makes intelligent and autonomous systems appropriately trustworthy. This strand includes topics such as reliability and robustness, and may involve insights from machine learning, psychology, human–computer interaction, anthropology and more.

(iii) Law and Governance: explores what policy instruments and standards can help ensure that AI systems are fair, appropriately transparent, trustworthy, interpretable, and respectful of privacy and human rights; what these concepts mean with regard to algorithms; and how they should be enforced.

Candidates are expected to have expertise in at least one of these strands, e.g. machine learning or law, including a relevant PhD. If the appointee has a machine learning background, they could be offered a joint appointment with the Machine Learning Group in the Department of Engineering.

Please upload in the Upload section of the online application (1) your CV; (2) a Covering Letter of no more than 1,500 words, outlining a proposed research direction and explaining how your skills and proposal would contribute to this project in particular, and to CFI more broadly; and (3) a Sample of Writing of no more than 5,000 words that demonstrates your suitability for this project. If you upload any additional documents, we will not be able to consider them as part of your application.
