This report, and the proposed funding approach detailed in it, has been a long time coming.
In 2017, the government ended the policy of demand-driven funding to universities, introduced by the previous Labor government in 2012.
Under the policy, the number of government-funded Commonwealth-supported places (CSPs) for Australian undergraduates was based on the number of undergraduates enrolling in courses. Roughly speaking, every student (with some exceptions) who enrolled could get a funded place and there was no limit or cap, which is why the policy was also known as “uncapped” funding.
But the cap was put back on in December 2017, when the government froze the number of CSPs at 2017 levels until 2020. It was arguably a rushed decision, made in the context of budgetary considerations during the 2017 Mid-Year Economic and Fiscal Outlook (MYEFO).
In December 2018, Education Minister Dan Tehan announced an expert panel, chaired by the University of Wollongong’s Vice-Chancellor, Professor Paul Wellings, to lead consultation with the sector on the implementation of the reform. This week’s report is the result of that consultation.
So, what is performance-based funding and what does the report propose?
How will the new scheme work?
First, it’s important to note the proposed performance-based funding allocation will only apply to “new” undergraduate places, above the 2017 cap level. This “growth” will also be in line with working-age population growth.
This means next year, only about A$80 million of funding will be subject to performance measures. The performance-based component would then grow year-on-year until it reaches a maximum of 7.5% of a university’s funding.
There is, of course, scope for this proportion to be adjusted in future.
The scheme proposed by the expert panel measures performance across four areas:
- student success, measured by the drop-out (attrition) rate
- equity group participation, measured by participation rates for Indigenous, low socio-economic status, and regional and remote students
- graduate employment outcomes, measured by the overall employment rate of graduates available for work at four months after graduating, and
- student experience, measured by student satisfaction with teaching quality drawn from the Student Experience Survey.
The application of the model is highly technical and includes complex analyses to smooth out areas where factors outside a university’s control may skew performance data – for example, the impact of economic conditions on graduate employment rates.
Universities and peak bodies have cautiously welcomed the proposal. There had been concerns the performance measures would be blunt instruments, prone to shifting universities unpredictably in and out of the ratings.
The inclusion of statistical techniques to attempt to smooth out these kinds of bumps has been widely welcomed. The minister has indicated the scheme will be fine-tuned to iron out unintended outcomes.
Has it been tried elsewhere?
Rewards and incentives to meet performance criteria are commonplace in the public sector – it’s part of a system developed from the 1980s known as New Public Management. This seeks to make public institutions more business-like through centralised management’s use of targets and metrics to drive efficiencies and behaviours.
Performance funding is already present in many of the Commonwealth schemes funding universities. This is particularly the case in research, where publication output, successful postgraduate research completions and the number of grants won drive further funding allocations.
A number of countries use similar performance models for undergraduate education to monitor and drive university behaviour, although the most notable (in the United Kingdom and New Zealand) do not attach funding to performance.
The big concern for higher education is not, at present, with the scheme itself, but with the total amount of funding available to the sector. This includes the growth funding that the scheme will allocate.
What are the challenges?
Allocations under the scheme will increase the national pool of undergraduate places in line with growth of the working-age population (about 1.1–1.4% a year in the decade to 2030). This is far below the growth of the youth cohort expected to be eligible to enter higher education over the same period, projected to peak at around 4.1%.
In practice, this will mean a decrease in university participation for young Australians. This is an abrupt change of policy direction – growing youth participation in higher education has been government policy, regardless of the party in power, for more than 40 years.
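To see why places growing with the working-age population implies falling participation, consider a rough, illustrative calculation. The sketch below assumes, purely for illustration, that funded places compound at 1.2% a year (a midpoint of the 1.1–1.4% range above) while the eligible youth cohort compounds at its projected peak rate of 4.1% for a full decade – in reality the cohort's growth rate will vary year to year.

```python
# Illustrative only: index both quantities at 100 and compound for a decade.
# The 1.2% and 4.1% rates are assumptions taken from the figures cited above.
places = cohort = 100.0

for year in range(10):
    places *= 1.012   # funded places track working-age population growth
    cohort *= 1.041   # eligible youth cohort grows at its projected peak rate

# Participation moves with the ratio of places to eligible young people.
participation_change = places / cohort - 1
print(f"Relative change in participation after 10 years: {participation_change:.1%}")
```

Under these assumptions the ratio of places to eligible young people falls by roughly a quarter over the decade – the mechanism behind the decrease in participation described above, though the exact figure depends entirely on the assumed rates.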
There is also scope for unintended negative outcomes, although it remains to be seen how these will be handled. For example, using graduate employment outcomes as a measure seems to put the cart before the horse. Universities can do little to influence the wider employment market.
Fine-tuning is needed here to ensure universities are not incentivised to decrease places available in some areas of science, and in particular mathematics, which have below-average employment outcomes.
Testing of the model will also be needed to understand how the equity group and the attrition measures will interact. Students from socially and educationally disadvantaged backgrounds typically drop out at higher rates than more advantaged students.
Institutions that can attract the most educationally prepared and high-performing students from these backgrounds may be rewarded disproportionately to those who take on less advantaged students, or students experiencing social and financial instability. The latter are more likely to find they need to drop out, but also benefit most from higher education.
The statistical aspects of the model clearly attempt to iron out these issues, but only further testing will tell.
Universities expect to find out more about the technical details of the scheme in the near future. It is an interesting time for policy wonks and statistical nerds – and an interesting time in the evolution of university policy for Australia.
Emmaline Bexley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Authors: Emmaline Bexley, Senior Manager, Higher Education Policy, La Trobe University