The range of techniques that makes up data science – new tools for analyzing data, new datasets, and novel forms of data – has great potential for use in public policy. To date, however, these tools have largely been the domain of academics, and where they have been put to use, the private sector has led the way.
At the same time, many applications of machine learning have been of only abstract interest to government: identifying trends on Twitter, for example, is interesting but not inherently useful for policy. Projects showcasing the power of new data and new tools, such as using machine learning algorithms to beat human experts at the game of Go, or to gauge the prevalence of cat videos supporting one political candidate or another, have been some distance from government application. Even where the techniques have been applicable, they have often not been adequately tested in the field, and the tools built from them have not been grounded in an understanding of the needs of end users.
BIT have conducted eight exemplar projects, focused on four areas: targeting inspections, improving the quality of randomized controlled trials (RCTs), helping professionals to make better decisions, and predicting which traffic collisions are most likely to result in someone being killed or seriously injured. This report covers six of the eight exemplars.