Italian chocolate, Bangladeshi samosas, Chilean cornbread flans, Turkmenistani beef chapattis — the aromas of the world’s traditional foods mingle seductively along the mile of pavilions at Expo Milano 2015, this year’s world fair, dedicated to food. All delicious, but are they all safe? Will future foods be safe? Who is to judge — and on what evidence?

In Europe, the European Food Safety Authority (EFSA) decides whether a new food can be marketed, and its job (like that of all similar regulatory agencies around the world) is getting tougher. Technological advances are creating ever more novel foods.

The same technologies, along with the Internet and databases, have created more sources of information that may have a bearing on safety assessment: terabytes of molecular information from genomic or proteomic analyses, for example, or more-qualitative data generated through crowdsourcing.

Public trust in EFSA’s decisions is patchy and, until now, the agency has been slow to engage with the problems and solutions that these technologies offer. But at a three-day conference in Milan — attached to the Expo, and concluding on 16 October, World Food Day — EFSA announced a new commitment to take on the modern challenges. As it does so, it can start to repair its rather undeserved reputation for non-transparency.

Created in 2002 and based in Parma, Italy, the agency is probably best known as the European Union's independent scientific advisory body, whose advice on the safety of genetically modified (GM) cereals has been serially rejected by many EU member states.

In most cases, EFSA’s science-based recommendations on the safety of new food products are accepted politically without too many questions. But the GM saga has fostered public distrust of its official scientific expertise. The experts commissioned by EFSA over the years to analyse whether GM technologies or products pose risks to health or the environment have seen their recommendations challenged time and again by protest groups claiming to have new evidence of dangers. As a one-off exception to the single-market rule, EU member states can decide individually whether to allow cultivation of a particular crop. Nineteen have registered their decisions to opt out, despite EFSA’s seal of safety.

EFSA does a good job of risk assessment and is reasonably transparent — but to stop distrust from seeping into all areas of its work, it needs to do more. Risk assessment is a complicated science to convey to the public and is becoming even more complex with every new potential source of information. EFSA must be transparent about the exact data that it uses to make individual judgements and about the methods it uses to determine the degree of uncertainty around those judgements. It must also find transparent ways to assign appropriate weight to different types of data that have been collected with varying degrees of scientific rigour.

The agency is on the case. This year, it carried out a public consultation on the communication of uncertainties, and it is rolling out a toolbox of methods to be tested systematically over the next year. Such methods may address, for example, how to weigh up evidence from computer modelling, from laboratory animal studies or from data gathered through social media — or how to assess whether a particular change observed in an organism is biologically relevant.

By its nature, risk assessment will never deliver simple answers. And concerned citizens, rightly, will never place blind trust in scientific expertise. That is why transparency about both data sources and analysis methods is so important: different people may interpret the same complex data set differently. Citizens need a clear picture of how a risk assessor has interpreted the data, so that they can challenge or accept the final decision of the risk manager.