
Russian Practices in Rating the Effectiveness of University Programs

Russian universities do not do well in international rankings. Recent attempts in Russia to create different forms of ranking are aimed at reflecting what strengths universities there may have, but it is up to the universities themselves to find ways to better characterize themselves in existing systems of ranking.

The need for the ranking of university programs

A competitive environment began to take shape with Russia's transition to the market. This process also affected the country's higher educational institutions. In the twenty-first century a new worldwide instrument of competition emerged: rankings. Now, foreign and Russian universities alike are engaged in the competitive pursuit of a worthy place in the most popular ranking systems. However, experience has revealed at least two fundamental mistakes that the country's institutions of higher learning commit in attempting to make their way into the top lists of global rankings of universities (GRU). Let us look at them in more detail.

First, Russia's colleges and universities would like to take part in global (international) competition, in which they have practically no chances of success at present. This is due to a number of circumstances. Traditionally, Russia had a functioning system of specialized institutes rather than classical universities. Given such a high degree of specialization, the country found it difficult to compete with the highly diversified universities of the West. Essentially, Russia's higher educational institutions took shape as purely educational structures rather than institutions engaged in both science and education, as in the United States and Great Britain, for example. Given the traditional lack of demand for research by college instructors, and the absence of incentives to conduct it, it is quite difficult to compete in the world research market. Additionally, competing on someone else's playing field and by someone else's rules (criteria) places Russia's educational institutions in a clearly unfavorable position. How are we, for example, to compete with American institutions of higher learning that set the fashion in science, operate in their own language environment, and enjoy hidden advantages such as endowments in the billions?

Second, even on the national level Russia's higher educational institutions are oriented mostly toward competition between universities as such, whereas it is much more important to compete in the specific areas of training that an institution offers. The replacement of professional competition in specific areas of science with competition between whole universities distorts the entire system of ratings and incentives. As a rule, the winners of such competition are the biggest and richest institutions, which command a major administrative resource in the person of their administrators. In such a situation, it is more appropriate to be oriented toward rankings of university faculties (RUF). And yet, it also does not make sense to compete in particular scientific areas on the global scale. It has become clear that fair international competition is simply not possible for a number of scientific disciplines. For example, physicists and mathematicians in different countries, because they deal with the same, single object of research, are well able to compare themselves with one another; economists, sociologists, historians, and philologists, however, whose objects of research differ, are not able to do so. For these specialties, the importance of the national specifics of the research is too great to enable them to be compared on the basis of the same criteria. Hence, Russia's institutions of higher learning are clearly in a losing position in the world education market.

This leads to the recognition that the Russian system of higher educational institutions is in need of ranking and rating, but these procedures have to be substantially adapted to the existing specifics of the education market in Russia. In our opinion, two simple principles have to be applied. First, it is necessary to use rankings of faculties rather than rankings covering whole universities. To a large extent this will make it possible to keep the rating of a university's standing in a specific scientific area separate from the strength of the university's name and reputation. Second, it is necessary to use national rather than global rankings of faculties. There is no good reason to think that this country's higher educational institutions are able to rise to advanced positions in global RUFs. This is especially characteristic of the social sciences and the humanities; the social sciences in particular have to be dealt with by a special method. This is reflected directly, for example, in the adaptable weights used by Quacquarelli Symonds (QS), the British developer of university rankings [1]. The essence of that system is that different scientific disciplines are rated in accordance with three groups of factors, but the weights of these groups of factors are not the same for the different sciences.

These principles are acknowledged implicitly by many experts; work has been under way in Russia for quite a long time to rate specific educational programs and college and university faculties. Experience has already been accumulated in this field, and it needs to be examined critically in future formulations of RUF. Here and henceforth we treat the faculties, specialties, and programs of higher educational institutions as synonyms, because the methods of rating their quality form a separate area, in contrast to methods of rating the quality of universities as a whole.

The BEPIR Project

The BEPIR Project [Best Educational Programs for an Innovative Russia] is one of the most powerful and interesting projects for rating the quality of professional education in Russia’s institutions of higher learning. The project began in 2010, and three thematic handbooks have already been prepared, for 2010 through 2012 [2]. The organizers of the project are three closely linked structures: the Publishing House of the journal Akkreditatsiia v obrazovanii [Accreditation in Education]; the Guild of Experts in the Sphere of Professional Education; and the National Center for Public–Professional Accreditation.

The first to appear historically was Akkreditatsiia v obrazovanii (2005), which began systematically to bring together all information on the quality of higher education in Russia [3]. That publication came into being in response to the launching of the Education National Project, the competition between innovative programs among higher educational institutions, the rise and development of the first federal universities, the adoption of the multilevel system of specialist training, a new generation of educational standards, and so on, measures that in one way or another were linked to the accreditation of colleges and universities.

The Guild of Experts in the Sphere of Professional Education was created in 2006 by a Constituent Assembly of certified experts in rating the quality of professional education, at which a leadership council of eleven people and the president of the Guild were elected. The need to create the Guild of Experts arose in connection with Russia's entry into the all-European educational space and the necessity of forming a civic–state partnership in the system of accreditation and of broadly recruiting the academic community to improve the technology of independent consulting, auditing, and expert appraisal of the quality of the educational programs of colleges and universities. At the present time, the Guild of Experts consists of seven to eight hundred members, leading representatives of the academic community in practically all regions of Russia. About 600 of these are rectors, vice rectors, directors, and deputy directors of university branches [4].

The third participant in the BEPIR Project, the National Center for Public–Professional Accreditation (NCPPA), was created in 2009. It is an autonomous, noncommercial organization founded by the Guild of Experts in the Sphere of Professional Education and other legal entities for the purpose of organizing and conducting the accreditation of educational organizations. The mission of the Center is to form and develop a culture of quality in higher professional education by detecting, rating, and accrediting the best educational programs in accordance with Russian laws and European standards [5]. The NCPPA works actively on the international level: it is a full member of the Network of Agencies of Quality Guarantee in the Higher Education of Countries of Central and Eastern Europe, as well as of the Asia–Pacific Network of Quality Guarantee, and it holds associate status under the European Association of Quality Guarantee in Higher Education, as well as associate membership in the International Network of Agencies of Quality Guarantee in Higher Education.

The authors of the BEPIR Project emphasize that their method is not that of accreditation as such, nor is it a ranking list or a sociological study. The Project is based on the technology of benchmarking: the search for and detection of the best experience in higher professional education. The Project has dealt with two tasks. The first was to single out programs that enjoy the trust of the academic and professional community; the second was to pose the problem of method and draw the attention of specialists and the public to it [6]. The scope of the work of the Project participants is impressive: about 30,000 educational programs in the higher educational institutions of Russia have been rated, of which 2,330 programs were rated as the best.

The BEPIR Project is implemented via a comprehensive Internet survey involving the "producers" and "consumers" of educational services. The producers were subdivided into five groups: educational methodology associations (EMAs), with fifty-four associations on the federal level; leading scientists who possess information about the quality of education in the higher educational institutions of Russia; federal experts in higher education who have been certified by Rosobrnadzor [Federal Education and Science Oversight Agency]; noncertified experts; and rectors of the colleges and universities of Russia. The consumers were subdivided into the following groups: the Russian Union of Industrialists and Entrepreneurs; the Russian Federation Chamber of Trade and Industry (CTI) and territorial CTIs (seventy-six organizations); the Russian Union of Young People and its regional chapters (seventy-five organizations); the Russian Union of Young Scientists and its regional chapters (forty-five organizations); territorial associations of trade union organizations (seventy-nine associations); territorial bodies of Rostrud [Federal Labor and Employment Service] (eighty-eight divisions); and the ministries of education of the entities of the Russian Federation (eighty-eight ministries). The selected respondents were given individual passwords to enter the voting system on a specialized website and take part in an online Internet survey [7]. In the course of the voting the respondents were permitted, within their territorial or profile sample, to mark not more than 10 percent of educational programs as the most worthy. After the results were processed, a draft list of the best programs was put together and later subjected to discussion and verification in focus groups. In 2010, 4,078 experts took part in the Internet survey of the BEPIR Project.

Although the described algorithm for screening the best educational programs in the BEPIR Project looks quite logical and sound, its results prompt a lot of questions. For example, the lists of the best institutions of higher learning broken down by specific specialties do not include the higher educational institutions that ended up in the general, abstract list of "Institutions of Higher Professional Education Implementing the Best Educational Programs." As a result, the sectoral (faculty, program) lists turn out to be incomplete and, in fact, are not suitable to work with. In addition, a number of the sectoral headings are cause for bewilderment. For example, the section "Natural Sciences" in 2011 includes only two institutions for biology, two institutions for chemistry, and one institution for geography. The section "Social Sciences," in turn, includes one higher educational institution for sociology and one for social work. In the latter case, the very name of the specialty, "Social Work," invites rejection: it is at once both antiscientific and not aligned with the real meaning of the specialty, which involves management in the social sphere. No less surprising is the heading "Agriculture and the Fishing Industry," in which two institutions that train specialists in the mechanization of agriculture wound up [8].

On the whole, careful study of the BEPIR reference work gives the impression that higher education in Russia is in sad shape, and even breaking it down by the specialties and programs examined is problematic. In our opinion, the main source of such a perception is the inept classification of educational programs in the BEPIR Project. It cannot be ruled out that the survey procedure of the project itself contains systemic flaws, and this is what produces the unsuitable results. In all likelihood the project should proceed along the lines of the traditional classification of the sciences and consolidated specialties.

However, the BEPIR's chief shortcoming, we think, is that it does not rank the programs being rated. If, for example, only two higher educational institutions in the country have good-quality programs in biology, what does that mean? From what set were these two institutions selected? And if many institutions have programs of that profile, how do they compare in terms of quality? To what extent is the program in one institution better than a counterpart program in a different higher educational institution? And if the programs in these two institutions are better than in other institutions, exactly how good are they in principle? The BEPIR Project does not answer these questions.

It is reasonable to assert that the BEPIR Project offers a kind of semifinished information product as the result, and fails to enable an understanding of the disposition of educational programs in Russia’s higher education market. In our opinion, BEPIR contains enough data to take the next research step in regard to a quantitative assessment and comparison of all programs; so far, however, the BEPIR resource has clearly not been fully utilized.

The RME Project

One other interesting project in the rating of a specific kind of education is a study by the ReitOR Independent Ranking Agency (IRA), which compiled a ranking of management education (RME) for Russia's higher educational institutions in 2006 [9]. The methodology of that ranking is not unique; in many ways it is reminiscent of the methodology of the French Professional Ranking of World Universities (PRWU) developed a year later, in 2007, by the Ecole Nationale Superieure des Mines de Paris. As its main criterion for rating an institution of higher learning, the PRWU used data on the number of the institution's graduates who are among the general directors of major companies represented in the annual Fortune Global 500 list [10]; the RME also used the "return on career" criterion of management education, but on the basis of data on the number of graduates of institutions of higher learning who are members of the country's management elite.

At the same time, the developers of the RME looked at the elite of Russia's state administration; for this purpose, the entire structure of state administration was broken down into a number of blocks: presidential, governmental, legislative, judicial, control, and gubernatorial. The total number of representatives of the management elite came to 1,152 people. The presidential block included: the president; the head of the presidential administration (PA) and his deputies; presidential aides; presidential advisors; the president's plenipotentiary representatives in the federal districts; the leadership of the administration of affairs of the president of the Russian Federation; the leadership of the Security Council; and the directors of the main administrations of the PA. The total number of members of that block is fifty people. The governmental block includes: the chairperson and deputy chairpeople of the government of the Russian Federation; ministers and their deputies; directors of federal services and their deputies; directors of federal agencies and their deputies; the director of the government apparatus and deputies; and directors of departments of the government apparatus. The total number of representatives in this block is 320. The legislative block includes: members of the Council of the Federation of the Federal Assembly of the Russian Federation; and deputies of the State Duma of the Federal Assembly of the Russian Federation. The total number of representatives in the block is 617. The judicial block includes: the chairperson and members of the Constitutional Court of the Russian Federation; the chairperson and members of the Supreme Court of the Russian Federation; and the chairperson and members of the Higher Arbitration Court of the Russian Federation. The total number of representatives in the judicial block is forty-five people. The control block includes: the prosecutor general of the Russian Federation and deputies; the chairperson and the auditors of the Audit Chamber; and the chairperson and members of the Central Election Commission. The total number of representatives of the control block is thirty-two people. The gubernatorial block includes the heads of administrations of the entities of the Russian Federation (presidents of the national republics, governors, and so on). The total number of representatives of the gubernatorial block is eighty-eight people. For all these people, data were collected on the higher educational institutions they had attended (where they acquired their first and any subsequent higher education).

The RME comprises two separate rankings of higher educational institutions: one for the first higher education and one for a second (or subsequent) higher education. A higher educational institution's rating is the number of its graduates working in the higher echelons of state authority. In this regard, the RME is extremely simple and transparent, and it is hardly susceptible to falsification. The first ranking took in seventy-two institutions of higher learning; the overall list of suppliers of top cadres for the system of state administration included 415 higher educational institutions of Russia and countries of the former Soviet Union. The second ranking included thirty-four institutions; the list of higher educational institutions in the case of a second education included 170 state and nonstate institutions of higher learning of Russia and countries of the former Soviet Union, as well as six Western universities and business schools.

The RME has a number of obvious shortcomings. For example, the ranking of the higher educational institutions was carried out in 2005 and 2006, but the results are not comparable owing to differences in the samples for the two years. In addition to that technical shortcoming we can discern in the RME a misrepresentation of the semantic load of ranking indicators. For example, its formulator believed implicitly that the more graduates of some institution of higher learning are working in top positions in state structures, the better the education in that institution must be. Unfortunately, that logic does not work in Russia. The college and university graduates’ “recruitment” into the system of state administration is affected by quite different factors. For example, the Russian Academy of State Service under the president of the Russian Federation turned out to be the leader in the case of a second higher education. This was due to the Academy’s special status and the fact that almost all student streams for the retraining and upgrading of qualifications of established officeholders were channeled through it. The rankings leader in the case of a first higher education was Moscow M.V.Lomonosov State University, which got its first-place ranking thanks to its size, its lengthy history, and its good connections with various employers, including state service.

Nonetheless, the RME can be considered a progressive initiative and might have been fully worthy of being continued, at least for the purpose of rating the management specialties of higher educational institutions. However, this did not happen, because the ReitOR ceased operations [11].

The CTR Project

The ReitOR IRA, while still in operation, left a definite stamp on the compilation of rankings of higher educational institutions in Russia. In particular, in 2009 it carried out an original survey to compile a ranking of the cost of tuition (CTR) in the higher educational institutions of Moscow [12]. The ReitOR IRA surveyed eighty-nine institutions in Moscow: seventy-one state institutions and eighteen nonstate institutions. The ranking tables showed the minimum and maximum tuition costs within generalized areas of training; within these areas, institutions were ranked by the maximum cost of the education. As the developer noted, this approach made it possible to interpret the data obtained as a kind of antiranking of the institutions. The basis of this interpretation was the idea that competition among higher educational institutions in the education market ought to increase the flexibility of price formation and, all other conditions being equal, lower tuition costs.

Let us note that the perception of the CTR as a social antiranking of higher educational institutions is not the only possible view. A different interpretation can be seen in the perception of the CTR as a ranking of education quality, since the cost takes account of the factor of quality and the customer’s assessment of its value. In our opinion, that interpretation of the CTR is more informative and meaningful.

An important characteristic of the CTR is that it is graded by consolidated specialties, each representing a "bundle" of specific specialties and areas of college students' training. In this sense the CTR is an aggregate set of rankings for thirteen consolidated specialties: economics and management (forty-four specialties); law (four); public relations, journalism, and the social sciences (fifteen); advertising and design (four); education (thirty-nine); the agroindustrial complex, biotechnologies, and the production of food products (nineteen); medicine and bioengineering (thirteen); services and travel (six); construction and architecture (twelve); the energy industry (nine); machine building, equipment, and technologies (seventy-three); information and communication technologies and electronics (seventy-seven); and aviation and rocket engineering (fourteen).

A shortcoming of the CTR was its narrow geographical sample, confined to Moscow. In addition, owing to the specific character of the data used, the ReitOR IRA did not have enough administrative resources to obtain all the necessary information; a number of higher educational institutions categorically refused to provide the information that the agency requested. Because of this, the sample of higher educational institutions, even in Moscow, was quite limited. Obtaining data of this kind on the scale of the whole country is possible only with the active support of the Russian Federation Ministry of Education and Science and, in a number of cases, the government of the Russian Federation.

The NRULF Project and its characteristics

One other list of rankings that is significant for Russia is formulated by the Interfax International Information Group (IIG) and Ekho Moskvy Radio, which have been conducting a joint project since 2009 to create a national ranking of universities (NRU) [13]. In October 2009, for example, as a result of competitive procedures the Inform-Invest ZAO [closed, joint-stock corporation], an affiliate structure of the Interfax IIG, signed a state contract with the Russian Federation Federal Education Agency to draw up the principles of an independent system for the rating and the compilation of rankings of Russia’s higher educational institutions. By 2009, rankings had been prepared for two categories: “Classical Universities” and “Law Schools and Faculties” (fifty higher educational institutions were represented in each category). Starting in 2010, Interfax and Ekho Moskvy made a decision to continue to work on the development of a methodology for the independent rating of higher educational institutions and to put together an annual national rankings list of the universities of Russia, using Interfax IIG’s own funds.

What is of greatest interest in light of our topic is the National Ranking of University Law Faculties (NRULF), which is compiled solely on the basis of data from sociological and expert surveys. An overall rating is formed by summing the weighted particular ratings, each normalized to its maximum value; the overall rating is in turn normalized to its maximum and converted to a 100-point scale.
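
The published description does not spell out the specific weights or criteria used by Interfax; the following is a minimal Python sketch of the aggregation scheme described above, with illustrative criteria, weights, and scores that are our own assumptions rather than data from the NRULF itself.

```python
# Hypothetical illustration of an NRULF-style aggregation: particular ratings
# are normalized to their maximum, combined as a weighted sum, and the overall
# rating is again normalized to the leader and rescaled to a 100-point scale.

def normalize_to_max(values):
    """Divide every value by the maximum of the series."""
    top = max(values.values())
    return {k: v / top for k, v in values.items()}

def overall_rating(particular_ratings, weights):
    """particular_ratings: {criterion: {university: raw score}}."""
    combined = {}
    for criterion, scores in particular_ratings.items():
        normed = normalize_to_max(scores)
        for uni, s in normed.items():
            combined[uni] = combined.get(uni, 0.0) + weights[criterion] * s
    # Normalize the weighted sums to the leader and convert to a 100-point scale.
    return {uni: round(100 * v, 1) for uni, v in normalize_to_max(combined).items()}

# Illustrative data (invented for the example, not taken from the ranking).
ratings = {
    "academic":  {"Uni A": 4.6, "Uni B": 4.1, "Uni C": 3.8},
    "employers": {"Uni A": 4.2, "Uni B": 4.4, "Uni C": 3.5},
    "graduates": {"Uni A": 4.0, "Uni B": 3.9, "Uni C": 4.1},
}
weights = {"academic": 0.5, "employers": 0.3, "graduates": 0.2}
print(overall_rating(ratings, weights))
# -> {'Uni A': 100.0, 'Uni B': 95.4, 'Uni C': 86.8}
```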

The structure of the survey looks like this. Four groups of experts and respondents are surveyed. The first group consists of the academic community as represented by the rectors of law schools and the deans of university law faculties; they rate the level of organization of the educational, research, and socialization [sotsializatorskie] processes on a 5-point scale. The developer presumed that these experts are competent in the subject of study and are the best informed about the situation in their colleagues' institutions. This group of respondents was surveyed online. The second group consists of employers who hire law school graduates. The third group consists of regional representatives of the Russian Association of Attorneys, who name the best institutions of higher learning that train lawyers. The fourth group consists of law school graduates who post their opinions about the institutions on the forums of law school websites and in specialized law blogs. In the integrated rating of an institution of higher learning the assessment by the academic community prevails, since that community is best informed about the quality of the organization of all the processes in the university environment. In addition, a majority of the directors of the leading law schools (faculties) are members of the leadership bodies of the Russian Association of Attorneys, as a result of which the academic rating becomes even more significant.

Just the attempt to compile a ranking of the country's law faculties deserves praise, but the resulting list has a number of serious shortcomings. The technical shortcomings include the following. First, the sample of institutions is small: only the fifty best-known law schools were selected, and enough data for compiling a list were obtained for only twenty-five higher educational institutions. Such a limited list is clearly not a complete one. Second, the sample of surveyed experts is small: there were about 200 graduates' ratings, with a response rate of only 40 percent, along with 50 academic experts and almost 150 directors of enterprises and personnel departments. Even if the structure of the sample is fully balanced, it cannot be seen as representative. In our opinion, however, the chief shortcoming is one of content, linked to the purely subjective (expert) rating of the quality of the education. The ranking contains essentially not a single indicator representing the objective results of the educational process. In that form, the NRULF belongs to the category of what are called reputation rankings, which at present rate not so much the actual quality of the education as the power of the institutions' educational brand names. In addition, there are serious objections to the methodology of reputation rankings, given the increasing cognitive distance between the expert and the institutions being rated [14].

In 2010, Interfax and Ekho Moskvy prepared a rankings list of pedagogical, humanities, and language schools, but this list occupies an intermediate position between the rankings of higher educational institutions and the rankings of faculties or specialties; it is closer to the former than the latter. The method of this ranking is extraordinarily overloaded by a number of different indicators, without, however, yielding an understanding of the correlation of the institutions in regard to specific specialties. In our opinion, the developer chose an inept structure of aggregation with respect to the areas, which turned out to be too consolidated.

In 2011, Interfax and Ekho Moskvy prepared a rankings list of transportation schools, but in all respects this list has to be considered unsuccessful; it can be regarded only as the start of a great deal of work in this area.

And so, in the framework of the RUF at the present time it is possible to speak of the experience of Interfax and Ekho Moskvy only with respect to the preparation of the NRULF; the other attempts are at the embryonic stage.

The SRFEA Project

From 2001 through 2006, the Federal Education Agency compiled its own rankings of higher educational institutions on the basis of collected statistical information. The lists were compiled on the basis of an annual order of the Russian Federation Ministry of Education and Science titled "On the Ranking of Higher Educational Institutions," which also set out the actual method for calculating all the parameters of the higher educational institutions [15]. Starting in 2007 the compilation of the rankings was replaced by longitudinal monitoring of the activity of educational institutions engaged in higher professional education. Nonetheless, in 2008 a series of specialized rankings of the Federal Education Agency (SRFEA) was prepared, subdivided by faculties and specialties: rankings of Russia's agricultural, architecture and arts, economic, law, medical, pedagogical, and technical institutions of higher learning, as well as institutions specializing in services and those specializing in physical culture, sports, and travel [16]. Later, the work of ranking the institutions was terminated.

A method adopted in 2012 by the Ministry of Education and Science involves the normalization and aggregation of a large number of parameters grouped into several large sections: the effectiveness of science and innovation activity; the effectiveness of cadre training; the intellectual potential of the institution of higher learning; the institution's material and information base; and the level of provision of dormitories, food services, preventive clinics, and sports facilities. Overall, the directive entails the preparation of fifty-eight indicators of an institution's activity [17].

The SRFEA method has some solid advantages, but also some drawbacks that are no less serious. Its merits include the fact that the higher educational institutions' activity is rated on a large number of indicators that essentially provide a full picture of that activity. All the indicators are transparent, understandable, and functionally defined. Comparing institutions on the basis of this set of indicators provides a completely objective system for ranking them. At the same time, in keeping with the tradition of the Russian regulatory authority, the regulator meticulously takes account of the institutions' resource potential in all the data on them. That approach conflicts with world practice and leads to an unjustifiably inflated role for the resource support of the educational process. Strictly speaking, for example, the extent to which students are provided with accommodations in dormitories has no relation to the quality of education, any more than the number of beds in the institutions' sanatoriums and preventive clinics does. The size of an institution's athletic facilities likewise has no direct relation to the quality of education. In today's world even the number of computers and the size of library holdings are of no importance: most people have their own personal computers, and practically no one goes to the library for hard copies of books; they prefer digital versions.

In addition, the SRFEA hardly takes any account of the institutions' faculty specifics as such, since doing so would require emphasizing the institution's results specifically in the area being rated (the faculty, the specialty). Essentially the Federal Education Agency merely ranks the same indicators for different groups of universities. Evidently this circumstance accounts for the extremely small set of higher educational institutions included in the SRFEA. For example, only 45 institutions are included in the rankings of schools of agriculture, 32 in the rankings of schools of economics, 38 in the rankings of medical schools, 74 in the rankings of pedagogical institutions, 157 in the rankings of engineering schools, 12 in the rankings of schools of physical culture, sports, and travel, 8 in the rankings of schools of law, 7 in the rankings of schools of architecture and art, and 7 in the rankings of schools of service. As a result, it is essentially not the faculties and specialties of higher educational institutions that are being rated but, rather, institutions that have some provisional specialization.

This leads to glaring absurdities in the lists of rankings. For example, on the ratings list of schools of economics the Higher School of Economics, which has a legitimate claim to first-place ranking in the country, ended up only in fifth place, while the State University of Administration, which positions itself not as a school of economics but as a school of management, ended up in third place. Moreover, Moscow M.V.Lomonosov State University and St. Petersburg State University did not end up in the rankings at all. This latter circumstance, evidently, is linked to the fact that Moscow State University and St. Petersburg State University have special status and are not subordinate to the Russian Federation Ministry of Education and Science. At the opposite pole is the Russian School of Economics, which also has the right to claim a high ranking among Russia’s colleges and universities of economic profile, but it is a private institution and is only partly subordinate to the system of the Russian Federation Ministry of Education and Science. As a result, in its compilation of the SRFEA the Federal Education Agency simply ignores institutions that are not under its direct jurisdiction.

On the whole, the SRFEA provides a unique information base for comparing RUFs, but in its initial form it is too heavily weighted toward rating higher educational institutions' resource potential, and it suffers from being too conservative when it comes to putting together the list of competing institutions of higher learning and the criteria by which they are ranked.

The USE-HSE Ranking

In 2011, the Higher School of Economics (HSE) National Research University and the Novosti Russian Information Agency launched, by order of the Russian Federation Public Chamber, a joint project titled "Public Control Over Procedures for Admissions to Higher Educational Institutions as a Condition for Ensuring Equal Access to an Education," within which the study "Longitudinal Monitoring of the Quality of Admissions to the State Institutions of Higher Learning of Russia" was carried out [18]. The purpose of the project is to compile a ranking of "the quality of the applicants for admission" to the country's various institutions of higher learning, which in turn shows which of the country's colleges and universities are most in demand among the best applicants.

The basic idea of the project is to calculate the average Unified State Examination (USE) score of the students admitted to each institution; this is the parameter that is crucial for ranking the country's institutions of higher learning. The result of this work is an HSE ranking compiled on the basis of the USE (the USE-HSE ranking).

In the general case, the indicators of the quality of admissions to higher educational institutions are the average and minimum USE scores (calculated per single subject). These indicators are determined from data on the USE scores of applicants who have been enrolled through competition for the first year of study in a bachelor's degree program (specialty), in budget-funded slots, in the regular full-time form of enrollment. The developer does not take account of the USE scores of people enrolled on the basis of target recruitment, on the results of school students' participation in olympiads, or with other enrollment preferences. The average score for a higher educational institution is weighted by the number of students enrolled in each area of training (each specialty): the average score for a specialty is multiplied by the number of students accepted through competition, and the sum of these products over all specialties is then divided by the total number of students enrolled through competition. The ranking presents this weighted average score for the institution, calculated per single USE subject. A minimum USE score for a higher educational institution is calculated in two ways: as the minimum score selected from the minimum scores for all of the institution's areas (that is, the score of the weakest enrollee); or as the average of the minimum scores for each specialty in the institution [19].
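
To make the arithmetic concrete, here is a minimal sketch of the indicators described above, assuming hypothetical data for a single institution; the field names and figures are illustrative and are not taken from the USE-HSE study itself.

```python
# Hypothetical data: 'avg_score' is the average USE score per subject for
# students admitted to a specialty through competition, 'min_score' is the
# score of the weakest such enrollee, 'enrolled' is the number admitted.
specialties = [
    {"name": "Economics", "avg_score": 72.4, "min_score": 61.0, "enrolled": 120},
    {"name": "Law",       "avg_score": 68.9, "min_score": 55.3, "enrolled": 80},
    {"name": "Sociology", "avg_score": 64.5, "min_score": 52.7, "enrolled": 40},
]

total = sum(s["enrolled"] for s in specialties)

# Institution-wide average: each specialty's average score is weighted by the
# number of students admitted through competition, divided by the total admitted.
weighted_avg = sum(s["avg_score"] * s["enrolled"] for s in specialties) / total

# The two variants of the institution's minimum score mentioned in the text.
min_of_minimums = min(s["min_score"] for s in specialties)
avg_of_minimums = sum(s["min_score"] for s in specialties) / len(specialties)

print(round(weighted_avg, 1), min_of_minimums, round(avg_of_minimums, 1))
# -> 69.9 52.7 56.3
```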

The basic philosophy of the USE-HSE project consists of rating the applicants’ demand for the services of some particular higher educational institution or other. The following logic is implicit: the higher the quality of the education in the institution, the higher the demand for it; the higher the requirements imposed on the applicants, the higher the quality of applicants and their USE scores have to be. And so, the USE-HSE ranking indirectly rates the quality of the institutions’ higher education.

In comparison with the SRFEA, the USE-HSE ranking has numerous advantages. First, it is simple, because it belongs to the category of methods with few parameters; the assumption is that the indicator of quality (the USE score) already takes into account all factors and aspects of the educational services. In addition, the USE-HSE ranking operates with a very large sample of higher educational institutions: in 2011, for example, the sample included 525 state institutions. In our opinion, however, the chief advantage of this ranking is the possibility of compiling it by specialties. In 2011, for example, rankings were compiled for 66 areas of training; these are, in effect, 66 RUFs. Moreover, the coverage of each of these RUFs compares favorably with, for example, the SRFEA. The ranking of schools of economics includes 285 institutions in the USE-HSE, compared to 32 in the SRFEA; the figures are 76 versus 38 for medical schools, 53 versus 12 for schools of physical culture and sports, and 146 versus 8 for schools of law [20]. The result is that the USE-HSE rankings by specialties can be considered full-fledged RUFs.

The USE-HSE ranking is not entirely free of shortcomings. Its chief flaw may be that the rating of the quality of higher education in the country's institutions of higher learning is effectively entrusted entirely to applicants for enrollment, who, owing to a number of circumstances, are not so well qualified as to be able to cope with such a complicated task. As a rule, the academic results of the activity of higher educational institutions (the chief measure of their professional level) are not visible to the community of applicants and are not taken into account.

In spite of the shortcomings and limitations of the USE-HSE ranking, at present it represents one of the most viable instruments for rating the quality of higher education.

The USE-F Ranking

The USE-HSE ranking became very popular quite quickly, giving rise to a wave of secondary rankings. For example, on the basis of the initial USE-HSE data, Forbes magazine proposed its own version of a ranking of the strongest universities in Russia (the Top Thirty): the USE-F [21]. The basic indicator of the USE-F is also the average USE score of applicants for enrollment, but in order to eliminate nonrepresentative results, Forbes adopted a number of additional criteria. It removed from the sample the higher educational institutions of the law enforcement and security agencies, medical institutes and universities, and universities in the North Caucasus Federal District. The following indicators served as additional criteria for selecting the universities included in the ranking: the number of students admitted through competition plus olympiad winners has to exceed 50 percent of the total number of students accepted for budget-funded slots (this eliminates higher educational institutions in which the average USE score has been overstated owing to the large number of students accepted on the basis of target recruitment without examinations); and the number of students admitted through competition has to exceed 150–200 people (the ranking eliminated institutions in which the number of students enrolled did not exceed 200).
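
As an illustration, here is a rough sketch of how screening rules of this kind could be applied to USE-HSE-style records; the field names, the 200-student cutoff as coded, and the sample figures are assumptions for the example, not Forbes's actual data or procedure.

```python
# Hypothetical screening in the spirit of the USE-F criteria described above.

EXCLUDED_PROFILES = {"security", "medical"}
EXCLUDED_DISTRICTS = {"North Caucasus"}

def passes_usef_filter(uni):
    """Keep a university only if it satisfies the selection criteria."""
    if uni["profile"] in EXCLUDED_PROFILES or uni["district"] in EXCLUDED_DISTRICTS:
        return False
    # Competitive admissions plus olympiad winners must exceed half of all
    # budget-funded places (screens out averages inflated by target recruitment).
    if (uni["competitive"] + uni["olympiad"]) / uni["budget_places"] <= 0.5:
        return False
    # The number of students admitted through competition must exceed the cutoff.
    if uni["competitive"] <= 200:
        return False
    return True

universities = [
    {"name": "Uni A", "profile": "classical", "district": "Central",
     "competitive": 850, "olympiad": 40, "budget_places": 1200, "avg_use": 78.2},
    {"name": "Uni B", "profile": "classical", "district": "Central",
     "competitive": 120, "olympiad": 5, "budget_places": 300, "avg_use": 81.5},
]

# Rank the surviving institutions by average USE score, highest first.
shortlist = sorted((u for u in universities if passes_usef_filter(u)),
                   key=lambda u: u["avg_use"], reverse=True)
print([u["name"] for u in shortlist])  # -> ['Uni A']
```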

The actual procedure of additional calibration of the USE-HSE ranking is quite curious, but the USE-F ranking obtained in this way is even cruder than the list it started from. The USE-F was confined to a general table of the strongest institutions of higher learning, whereas it would have been more interesting to go further and "clean up" the USE-HSE rankings for the individual science areas. In that case the initial USE-HSE base would be preserved, while the lists themselves would be adjusted. Moreover, it would be necessary to determine whether the rejected higher educational institutions should be reassigned elsewhere. In our opinion, the USE-F ranking can be improved in this regard. Forbes magazine has taken only the first timid step in compiling an RUF; it is hoped that it will take the next steps as soon as possible.

Basic conclusions

This survey of approaches to the rating of the quality of programs of higher educational institutions enables us to come to a few conclusions.

First, this country has accumulated quite a vast and diversified experience in rating the quality of education not only in the case of higher educational institutions as a whole but also with respect to specific specialties.

Second, Russian approaches to this kind of rating differ a great deal from the approaches that predominate in the developed countries of the West. The Western methods are oriented primarily toward the academic results of the higher educational institutions; the Russian methods are oriented toward ministerial normatives, resource parameters, and market indicators.

Third, most approaches in this field go to extremes: they are either too simple or too complex. The former include the single-parameter USE-HSE, USE-F, CTR, and RME rankings; the latter include the multifactorial BEPIR, NRULF, and SRFEA. So far, attempts to achieve a balance in complexity when compiling rankings of faculties have not been successful.

Fourth, most ranking initiatives got bogged down either because of inadequate financing or because of the absence of a felt public need for the rankings. The shelf life of most rankings is only a few years. As a result, we observe a kaleidoscope of unsuccessful attempts, which makes it impossible to trace the dynamics of changes in the higher education market.

We can propose a simple classification of existing rankings, by two criteria: the characteristics of the complexity and the type of the indicators used, which, somewhat provisionally, are divided into market indicators (demand, cost, reputation, and so on) and normative indicators (not entirely market indicators but more akin to administrative indicators—the number of highly placed office holders, the extent to which normative numerical values are exceeded, and so on) (see Table 1).

Table 1. Classification of Rankings of Programs of the Higher Educational Institutions of Russia

                 Too simple               Too complicated
Market           USE-HSE, USE-F, CTR      NRULF
Normative        RME                      BEPIR, SRFEA

It makes sense to develop a relatively simple but still multifactorial schema for RUFs that, without any special effort or outlay of money, can be repeated every year for different disciplines and specialties. At the same time, placing these rankings on the information market should be the responsibility not of state departments and specialized agencies but, rather, of the universities themselves, as in the West. At present, the universities of Russia have hardly taken part in this process. If, however, the universities that are leaders in their fields take upon themselves the creation of specialized RUFs, the gains will be manifold. First, compiling one ranking in a professional field in which it is closely engaged will pose no special problem for a major university, and this will relieve the state departments and ranking agencies of excessive, large-scale obligations. Second, these rankings will probably enjoy greater trust than rankings from bureaucratic structures of ruling authority and dubious ranking agencies. Third, it will enhance the stability of the compilation of the rankings, helping them gain recognition and turning them into an effective analytical instrument for all interested parties.

Notes

  1. See E.V. Balatskii, "Mirovoi opyt sostavleniia i ispol'zovaniia reitingov universitetskikh fakul'tetov," Obshchestvo i ekonomika, 2012, no. 9.
  2. See Ofitsial'nyi sait proekta "Luchshie obrazovatel'nye programmy innovatsionnoi Rossii," www.best-edu.ru.
  3. See Ofitsial'nyi sait zhurnala Akkreditatsiia v obrazovanii, www.akvobr.ru/o_zhurnale.html.
  4. See Ofitsial'nyi sait Gil'dii ekspertov v sfere professional'nogo obrazovaniia, http://expert-nica.ru.
  5. See Ofitsial'nyi sait Natsional'nogo tsentra obshchestvenno-professional'noi akkreditatsii, ncpa.ru.
  6. See Luchshie obrazovatel'nye programmy innovatsionnoi Rossii: Spravochnik (Moscow, 2011).
  7. See Ofitsial'nyi sait proekta "Luchshie obrazovatel'nye programmy innovatsionnoi Rossii," www.best-edu.ru.
  8. See Luchshie obrazovatel'nye programmy innovatsionnoi Rossii: Spravochnik (Moscow, 2011).
  9. See Obrazovanie predstavitelei elity gosudarstvennogo upravleniia Rossii. Reiting vuzov—2006/Biznes-obrazovanie v Rossii i za rubezhom, ubo.ru/analysis/?cat=1&pub=1719.
  10. See E.V. Balatskii and N.A. Ekimova, "Mezhdunarodnye reitingi universitetov: praktika sostavleniia i ispol'zovaniia," Zhurnal Novoi ekonomicheskoi assotsiatsii, 2011, no. 9.
  11. At the present time, there are no updates after 2010 on the official website of the ReitOR IRA. See "Konsalting i otsenka reitinga vuzov," reitor.ru.
  12. See "Reitingi vuzov 2009 © ANO NRA ReitOR," www.234555.ru/news/2009-08-16-153.
  13. See "Natsional'nyi reiting universitetov," univer-rating.ru/rating_branch.asp.
  14. See E.V. Balatskii and N.A. Ekimova, "Global'nye reitingi universitetov: problema manipulirovaniia," Zhurnal Novoi ekonomicheskoi assotsiatsii, 2012, no. 1 (13).
  15. See Prikaz Minobrazovaniia RF ot 26.02.2001 No. 631 "O reitinge vysshikh uchebnykh zavedenii," www.bestpravo.ru/rossijskoje/vg-pravila/d8k.htm.
  16. See "Reitingi vuzov Rossii," high-schools.ru/ratings.
  17. See "Vuzy Rossii," rating.edu.ru.
  18. See "Kachestvo priema na razlichnye napravleniia podgotovki VPO (po srednemu ballu EGE 2011)," www.edu.ru/abitur/act.74/index.php.
  19. See "Kachestvo priema v vuzy—2011: rezul'taty issledovaniia. Metodika rascheta," vid1.rian.ru/ig/ratings/Metod rascheta.pdf.
  20. See "Kachestvo priema na razlichnye napravleniia podgotovki VPO (po srednemu ballu EGE 2011)," www.edu.ru/abitur/act.74/index.php.
  21. See "Samye sil'nye universitety Rossii. Reiting."