[Image: A neutrino interaction in BEBC (the Big European Bubble Chamber) filled with a neon-hydrogen mixture]

The use of basic science: What science to fund

by C.H. Llewellyn Smith,
former Director-General of CERN

I have argued that economic, as well as cultural, considerations lead to the conclusion that public funding should be primarily directed to basic, rather than applied, science. If however we appeal to economic arguments in this way, we cannot object to their use in discussions of the partition of funding between different areas of basic science. The problem is that "both forecasting and innovation are highly stochastic processes, so that the probability of correctly forecasting an innovation, being the product of two low probabilities, is, in theory, close to zero."

If Rutherford, who discovered the nucleus, could not foresee nuclear power, could a government committee do better? Who could have foreseen warm superconductors, fullerenes, or the World Wide Web? Earlier I suggested that Faraday might have foreseen the applications of electricity but in 1867, nine years after Faraday's death, a meeting of British scientists pronounced that "Although we cannot say what remains to be invented, we can say that there seems to be no reason to believe that electricity will be used as a practical mode of power". In a similar vein, it is well known that Thomas Watson, the creator of IBM, said in 1947 that a single computer "could solve all the important scientific problems of the world involving scientific calculations" but that he did not foresee other uses for computers.

This unpredictability, which I have argued is one reason that it is up to governments to fund basic science in the first place, also means that in practice it is probably impossible, and very possibly dangerous, to try to distribute funding for basic science on the basis of perceived economic utility. The traditional criteria of scientific excellence, and the excellence of the people involved, are probably as good as any, and in my opinion these are the criteria that should continue to be used – after all, money is more abundant than brains even in this cost-conscious era.

The fact that the results of basic research are unpredictable does not mean that economic incentives to find solutions to specific applied problems are futile. Nineteenth-century scientists sought methods for the artificial fixation of nitrogen, but failed until the First World War deprived Germany of fertilisers, whereupon a solution was quickly found. US science, technology and money met the political imperative to put a man on the moon before 1970. But it is important to understand when such incentives are likely to be effective and when they are not. President Nixon launched a battle against cancer, modelled explicitly on the success of the space programme, but it failed. The reason is clear enough. The physical principles involved in putting men on the moon were well understood before the space programme began, while our knowledge of the biological principles underlying the growth and mutation of cells is still limited.

This brings me to the funding of applied research. I have argued that, generally, governments should keep "away from the market", and fund areas that are 'public goods' because the returns are long-term, or not commercial, e.g. research on the environment or traffic control. Near-market work can and should be left mainly to industry, a view with which industry agrees, according to J. Baruch, on whose recent article (ref. 18) the following paragraph is based.

Big companies such as 3M, IBM, Siemens, Ford, etc. want to innovate with current technologies that can be priced and predicted accurately, and do not want the help of academics, which would only force them to share the profits. Nor are academics generally interested in such collaboration. The exceptions are academics wanting to innovate with available technologies in order to develop new instruments for their research (a category which includes particle physicists). Here there is considerable mutual benefit and considerable synergy between technological innovation for profit and technological innovation for research. Indeed, according to Baruch, "The people who have most to offer [to industry] are the dedicated research scientists, not the academic technologists or engineers, who do not wish to be distracted from their research in order to help solve commonplace technological problems".

There was a time when governments were, as advocated here, generally prepared to direct funding primarily to basic science on the basis of scientific excellence. In the UK, for example, the 1978 OECD Science and Technology Outlook found that "objectives for science and technology are not centrally defined ... it is considered that priorities in fundamental research are best determined by the scientists themselves...". This has changed. In the UK Government's 1993 White Paper on Science and Technology, which was based on the premise that science and technology should be harnessed for wealth creation, it was proposed to set priorities by a "technology foresight" programme. The mission was "to ensure that Government expenditure on science and technology is targeted to make the maximum contribution to our national economic performance and 'quality of life'". This might seem no more dangerous, if no more useful, than deciding only to invest in shares that are about to increase in price. In fact, however, although the resulting foresight reviews have had some positive results, the results are being used in ways that threaten basic science.

Such foresight reviews have been undertaken in other countries: Japan was first, in 1970, followed by France, Sweden, the Netherlands and Australia, and then by an initially sceptical UK. No doubt others will follow, so it is worth saying something about them (see ref. 19 for a review of various foresight exercises).

A typical foresight process runs as follows:

  • A 'short list' of important enabling sciences/technologies is developed by some means
  • 'Experts' investigate the technologies on the list
  • Multi-disciplinary, multi-sectoral 'groups' discuss the results of the investigation
  • Reports of the groups' discussions are presented to decision makers.

For example, the recent UK 'Technology Foresight Programme', which was designed to look ahead 10-20 years at markets and technology, set up Foresight Panels on the following topics:

  • Agriculture, natural resources and environment
  • Manufacturing, Production and Business Processes
  • Defence & Aerospace
  • Materials
  • Chemicals
  • Construction
  • Financial Services
  • Food and Drink
  • Health and Life Sciences
  • Energy
  • Transport
  • Communications
  • Leisure, Education
  • IT and Electronics
  • Retail and Distribution

The output was 360 recommendations, with the following six overarching themes:

  • Communications and computing power
  • New organisms, products and processes
  • Advances in materials science, engineering and technology
  • Getting production processes and services right
  • Need for a cleaner, more sustainable world
  • Social trends - demographics and greater public acceptance of new technology.

Under these themes, 27 generic priorities were identified for development by the scientific and industrial communities in partnership. The report also identified five broad infrastructural priorities:

  • Knowledge and skills base
  • Basic research excellence
  • Communications infrastructure
  • Long-term finance
  • Continuous updating of policy and regulatory frameworks.

It seems to be generally agreed that the process served a very valuable role in bringing together people from industry, government and academia. Furthermore, the results are probably useful in identifying potential technological growth points on the time scale of interest to industry. However, for basic science there is a grave danger that the results will be used as a basis for "planning to avoid failure", and will unduly influence choices in funding.

Indeed this seems to have already happened, and British Research Councils are now required to consider, as one criterion, whether a research application may serve the priorities of foresight, although this was not originally intended. Such a criterion would clearly have prevented Thomson from discovering the electron!

 
