Scientific Methods and the Validation of Scientific Questions

I am currently putting together a pdf document with the title: On the need for a global academic internet platform. In this randform post you can find the first section. The links contained therein lead to posts which partially went into the section below or which will go into the third section.

Hence below is an excerpt of the pdf draft containing some new arguments for such a platform and a few of the old arguments. In particular the suggestions for a workflow (scroll down to the last subsection) are mostly new. Again – I put this here on the blog in order to encourage discussion about it.

->pdf draft of February 26


Scientific method, knowledge accumulation

In this section I would like to briefly discuss the role of the scientific method and the validation of scientific questions, mainly using the example of math/computer science and physics.

The purpose of this is to explain to a nonscientific audience why the decision process in science is different from that in politics and society. However, the procedure of how “the” scientific method works also gives us indications of how the proposed internet platform may work.

Due to the logical nature of math (the language for physics) the evaluation of a given scientific question/hypothesis is relatively straightforward.

In particular, mathematics sometimes even provides notions of whether a question is solvable at all, of how complex a question may be, or of how random an answer is.

Mathematical assertions can be checked for logical consistency. Interestingly, the computer has become more and more important in this in recent years.

Assertions in physics can to a great extent be checked by measurements and observations. Physical models/hypotheses/theories (i.e. the mathematical descriptions of physical entities) have to be validated in accordance with these measurements/observations and in accordance with the mathematics describing them.

The whole process must be objective, so that the scientist does not bias the interpretation of the results; this includes that the measurements/observations must in principle be replicable in order to verify them.

In short, there is a well-established, widely accepted method for checking hypotheses in mathematics and physics, which applies to a great extent also to other sciences like biology and chemistry, and partially also to the humanities, like the social sciences, and to economics.

How to set up a hypothesis, however, and which questions should be asked in the first place, is usually not straightforward; it is a process which involves imagination, intuition and sometimes also scientific fashions.

Scientific method, review of results

The review/verification process of a result includes various formal steps, including the prepublication of results, which ranges from internal discussions with trusted experts to putting them on an electronic archive system as so-called e-prints.

Lately there have also been some examples where authors – mostly well-established faculty members – put drafts of their scientific results out for discussion on websites, as a kind of pre-prepublication. However this presupposes that the work has reached a certain stage of maturity and that the authors are prepared for discussions.

The archive arXiv.org, which was founded in 1991, plays a prominent role in this: here almost all math and physics publications are freely prepublished and sorted into a content-classification system.

The final step of a publication is then usually done in a peer-reviewed journal, where peer review means that the work is independently reviewed by usually at least two anonymous experts (the author is usually not anonymous). The anonymity guarantees to a certain degree that the work is investigated solely in terms of content and not in terms of things like personal sympathy. It should be remarked, however, that it is quite unusual for “negative results” – i.e. cases where research led to no result – to be published at all, although the description of these cases could constitute valuable information.

This is (very) roughly what people mean by the scientific method of knowledge acquisition (please see also the wikipedia portal on “scientific method”). In particular, this method has been designed to ensure objectivity and especially to avoid biases, like the confirmation bias, whereby we seek and find confirmatory evidence in support of already existing beliefs and ignore or reinterpret disconfirmatory evidence.

But as mentioned before, “the” scientific method is not a fixed recipe. It is an ongoing cycle, constantly developing more useful, accurate and comprehensive models, hypotheses and methods.

By the above it is also clear that this system can fail; in particular, the sensitivity of the system to funding and rewards is a delicate issue.

Scientific method, failures, scientific integrity

The purpose of this section is to describe the sensitivity of the scientific method with respect to funding, in order to provide insight into possible vulnerabilities of the platform.

The sensitivity of the scientific method to funding starts with the choice of questions. If research has to lead to certain results in a predefined way (for instance via timelines in a research proposal), then questions will be chosen with respect to whether this can be achieved at all, which implies that questions which are presumably too hard to solve will be left out of such proposals. This holds true also for frequent evaluations, where usually only “positive” and “final” achievements are rewarded (which in terms of science funding are often counted as numbers of publications and patents) – i.e. again, in terms of evaluation you had better choose a subject which has some chance of being successful. Hence the shorter and more limited the proposal/evaluation cycles are, the more results will be “small” results.

Small results or “almost fully satisfying” results can sometimes be useful, e.g. in industrial mathematics, where an intelligent mathematical optimization can sometimes do wonders and may already be sufficient progress considering the invested time and money. But think of how long it took to prove the Fermat conjecture (about 350 years) and imagine how many people would try to apply for a grant proposal in a similar case.

Funding problems can also serve as a test case for scientific integrity, i.e. for whether the principles of the scientific method are violated. This needs no further explanation – scientists, too, may be corrupt. However the scientific method makes corruption much harder than in ordinary life, so funding problems usually result in unpleasant interactions among scientists rather than in wrong assertions. Still, funding policies may distort the overall picture: if you look for evidence only in a certain direction, this may lead to insufficient and even wrong conclusions.

Despite the usually high integrity of scientists, the problem of scientific integrity has to be mentioned – especially in the context of industry- or politically funded expert reports. The more an individual or a small research group depends on certain funds, the higher the danger of a violation of scientific integrity. Conversely, this indicates that the more diverse and the larger the number of groups involved in the discussion of a scientific question, the more integrity can be expected. Furthermore, the more open the process of developing a solution is, the more peer review will automatically ensure integrity. Here, again, the dependence on funding/rewards may result in fears that colleagues snatch away intermediate results etc., and thus in hiding the work.

For the case of the electronic platform this implies, firstly, that the running expenses of the platform have to be borne by the universities alone; whether they get reimbursed through an overall higher budget is another question. An initial extra fundraise to install the technical infrastructure etc. may however be useful. Moreover, not much direct research which depends on funding should be involved with the platform (too expensive); rather, available information should be gathered for an evaluation process.

Secondly, discussions should involve as many work groups as is sensible. The more widespread and the more diverse the groups, the harder it will be for lobbyists to exert influence. Scientific discussion should be as open as possible. However it may be necessary to hide work and contributors in order to prevent lobbyism or for other reasons. Experiences with information blockades after nuclear accidents are an example where e.g. politics interfered with the pure demand for scientific information.

Thirdly, the sort of questions to be addressed has to be of public interest, where “public” may include the scientific public only. Particular benefits for companies have to be avoided or at least discussed openly – as they probably can't always be avoided – but this holds true for scientific results in general.

Further Implications for an electronic platform

There is another aspect one should mention. The scientific method deals with scientific questions. Often the scientific questions to be discussed are strongly related to e.g. economic, juridical and ethical questions. A natural-scientific judgement which involves the scientific method may need to be evaluated or juxtaposed with considerations of (economic, political) realizability and of ethics. An example: the use of genetically modified plants may impose severe health risks. We may come to the point where one has to use genetically modified plants in order to feed the planet (up to now it is not necessary, I think!). So this question has to be discussed in conjunction with these constraints, or at least juxtaposed with them. The humanities sections of universities are a very valuable partner in doing this.

This is why the platform could also further interdisciplinarity.

A workflow proposal for the electronic platform

The general scheme of how the scientific method works gives us indications about the possible design of the proposed internet platform/network. The exact technical realization of such a platform is indeed a sensitive issue and beyond the scope of this article. Here is a proposal for a general workflow scheme:

1. Define questions

The notion of the platform working as a “parliament” means that there are predefined questions. These questions come from society and from science itself, just as laws or societal questions are discussed in a parliament. So society is basically doing the “which-question-should-be-answered” search part of the scientific method.

The “parliament members” or “experts” are faculty members of universities. The parliament itself is run by the universities. It would actually be good if the compilation of questions were preprocessed and discussed in a forum which is open to everybody, just like wikipedia.

The questions which are suitable for investigation need to be stated in a precise manner, i.e. scientists may have to reformulate them or dissect them into scientific-validation questions and economic, political and moral subquestions. The question's relevance has to be established, it has to be ensured that particularism is avoided, and it has to be decided whether a question will be made into an official question and as such published on the platform.

2. Supply expertise and data

Experts need to supply data on a given official question, which means available scientific material. This presupposes an initial choice of experts, who may supervise the gathering of material and of further experts. This process is thus similar to the work of a journal editor, who assigns communicating faculty, who in turn assign referees for a work. If global experts are electronically registered and their expertise is classified via keywords, as in the Mathematics Subject Classification MSC, then expert retrieval is fairly simple. However the validation of a question need not necessarily be confined to experts. Non-experts could a priori have the possibility to contribute, at least by commenting and providing data. Often e.g. students are very well if not better informed and may contribute at least to data gathering. It should also be possible to invite experts from outside the university faculty, especially e.g. members of research institutes.
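To illustrate how simple such keyword-based expert retrieval could be, here is a minimal sketch. All names, classification codes and the matching rule are hypothetical placeholders, not part of any existing registry:

```python
# Minimal sketch of keyword-based expert retrieval.
# Experts are tagged with classification codes (MSC-like strings);
# a query returns the experts whose tags overlap the question's codes,
# sorted by the size of the overlap.

experts = {
    "A. Example": {"53A10", "58E12"},   # hypothetical: differential geometry
    "B. Sample":  {"68T05", "62H30"},   # hypothetical: machine learning
    "C. Demo":    {"53A10", "35J60"},   # hypothetical: geometry / elliptic PDEs
}

def find_experts(question_codes, registry):
    """Return expert names sorted by how many of the question's codes they match."""
    matches = []
    for name, tags in registry.items():
        overlap = tags & question_codes
        if overlap:
            matches.append((name, len(overlap)))
    return [name for name, n in sorted(matches, key=lambda m: -m[1])]

print(find_experts({"53A10", "35J60"}, experts))  # ['C. Demo', 'A. Example']
```

A real platform would of course need richer matching (hierarchical codes, synonyms, cross-discipline mappings), but the basic retrieval step is no more than a tag intersection.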

If questions are related to issues of specific nations, then it has to be decided whether this is a national or even more local question. If this is not the case, it has to be decided whether there has to be a nationality balance in accordance with, say, UN proportions.

3. Formulate answers/hypotheses

Based on their data/expertise, experts may formulate answers (hypotheses) and “prepublish” them in a “library” next to a discussion forum corresponding to the question – or, if necessary, assert that no hypothesis can be made, e.g. because there is not enough material. Depending on the question, conferences may be needed (like in the case of the climate change discussions). Here NGOs may play an important role.

4. Evaluate answers, publish them

Based on the discussion, preliminary or final answers can be formulated and officially published as such. The answers can be explained via the gathered data. The presentation of the results will certainly need good collaboration with science communicators/journalists in order to avoid communication problems like those which occurred, for example, in the report about childhood cancer in the vicinity of nuclear power plants.

In the generic case, together with the answer it should be made visible how many experts are in favour of which answer and to what extent. This is what I would call a “vote” or “poll”. It may be that experts decide that some aspects are more important than others.

The exact question of how to poll and decide on the results has to be settled by the experts; however, there could be simplified voting mechanisms. In particular, simple questions could be answered via e.g. voting on multiple-choice options. This may sound crude, especially in the context of the careful design of the scientific method, but it may often be sufficient for certain questions or at least for intermediate decisions.
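As a minimal sketch of such a poll, assuming each expert vote carries an optional weight to express “to what extent” they are in favour (the names, options and weights below are made up for illustration):

```python
# Sketch of a simple weighted poll on a multiple-choice question.
# Each vote is (expert, option, weight); the tally sums weights per option.
from collections import defaultdict

def tally(votes):
    """votes: list of (expert, option, weight) tuples.
    Returns a dict mapping each option to its total weight."""
    totals = defaultdict(float)
    for _, option, weight in votes:
        totals[option] += weight
    return dict(totals)

votes = [
    ("expert1", "yes", 1.0),
    ("expert2", "yes", 0.5),   # only partially in favour
    ("expert3", "no",  1.0),
]
print(tally(votes))  # {'yes': 1.5, 'no': 1.0}
```

With all weights set to 1 this reduces to an ordinary multiple-choice vote; weights are one possible way to record that some experts consider certain aspects more important than others.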

Depending on the question, the vote as well as the electronic discussion of the results itself must in principle offer the possibility of being made anonymous, in order to safeguard the involved scientists, as pointed out earlier.

5. Set timeline

A timeline for further investigations and validations has to be given alongside the “answers”. It should be discussed whether and how one could adapt these to strategies like adaptive management.

Further comments

So in principle the workflow of the platform is not so different from scientific day-to-day practice, in that it resembles the workflow of the scientific method.

However, there are differences from day-to-day practice. I would like to emphasize some of them, as well as some other relevant points:

–The platform would collect and sort structural data from all universities worldwide and thus provide a worldwide academic network in an electronically and semantically connected way.

–Besides being an organisational framework, the platform will have a task, namely to represent the global network of universities and to provide answers to scientifically difficult societal questions. The questions will come – at least in part – from society; i.e. in particular, the “answering service” of such a platform will be seen as a service of academia to society.

–The amount of answers, and the work involved in producing them, is freely adjustable. I.e. if universities don't have enough resources they may decide to terminate the service. Likewise, if societies are not content with the service, they may proceed to cut science budgets.

–The answers to the given questions will in principle already exist rather than be newly researched; i.e. the answers should reflect the current scientific knowledge rather than constitute research. The main value of the platform is thus that experts provide and connect information and expertise, rather than that they do research. This of course doesn't exclude that small short-term research may be involved or that further research may be necessary (see timeline).

–The platform could be used as a call-in instrument, i.e. if scientists are concerned about certain questions, they could call in colleagues rather easily. Since everything is electronic, these calls can simply be categorized according to relevance, local connectivity etc. Thus mailing lists could be assembled very easily.
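The mailing-list assembly mentioned above could be sketched as follows; the addresses and topic keywords are invented placeholders, and a real system would draw on the registered expertise classification instead:

```python
# Sketch: assemble a mailing list for an incoming "call" by matching the
# call's topic keywords against registered colleagues' interest tags.
# All addresses and topics are hypothetical placeholder data.

interests = {
    "alice@uni-a.example": {"nuclear", "safety"},
    "bob@uni-b.example":   {"climate"},
    "carol@uni-c.example": {"nuclear"},
}

def mailing_list(call_topics, registry):
    """Return (sorted) addresses of everyone whose interests overlap the call's topics."""
    return sorted(addr for addr, topics in registry.items()
                  if topics & call_topics)

print(mailing_list({"nuclear"}, interests))
# ['alice@uni-a.example', 'carol@uni-c.example']
```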

–Since the infrastructure of universities is used (computers, rooms), the running expenses can be kept relatively small.

–Most of the information which is needed for the platform already exists as electronically available information (like lists of faculty members, e-prints, open access journals etc.). This information needs, to a great extent, only to be connected. This is more a technical challenge than an organisational one. In general, organisational regulations should be kept minimal and scientists should be trusted in their capacity for self-control and self-organisation (given acceptable living and working conditions).
