11.1 - Monitoring, evaluation and reporting of the EPBC Act is inadequate

11.1.1 - Requirements for monitoring and reporting under the EPBC Act are inadequate

The EPBC Act includes some requirements for monitoring and reporting on activities and outcomes. However, these do not span the operation of the Act and follow-through is poor. Resourcing constraints mean that the focus is on reporting to meet the bare minimum requirements, rather than monitoring and evaluation driving adaptive improvements over time.

For listed threatened species and ecological communities, requirements for monitoring are limited in scope. Recovery plans for threatened species are required to include details on how progress will be monitored, but there is no requirement to implement monitoring activities and report on whether outcomes are being achieved. This means that efforts to monitor and report are a rare exception, rather than common practice.

Conservation advices for listed threatened species and ecological communities contain no detail on monitoring requirements. Most mandated 5-yearly reviews of threat abatement plans are either well behind schedule or have not occurred (TSSC 2020).

For developments approved under Part 9 of the EPBC Act, the Environment Minister may attach conditions that require specified environmental monitoring or testing to be carried out and reports to be prepared. This is an administrative decision, rather than a statutory requirement. As highlighted in Chapter 8 and Chapter 9, where offsets form a condition of approval, there is no comprehensive tracking of offsets or assessment to determine if they are achieving the intended outcomes.

Strategic assessments made under Part 10 of the EPBC Act often include provisions for monitoring and evaluation, although this is not a requirement. Approval holders are required to provide reports, but the Department lacks the capacity to follow up if activities are not conducted. Similarly, bilateral agreements may include provisions for auditing, monitoring and reporting on the operation and effectiveness of all or part of the agreement, but such provisions are not required.

Other parts of the EPBC Act require management plans to be developed and, in some cases, reports against these plans to be prepared. Many requirements and approaches currently fall short of best practice, but there is ongoing effort to improve the quality and consistency of planning and reporting. Inconsistencies, inadequacies and delays exist in the monitoring and evaluation of activities by other regulators subject to agreements under the Act. An example is Regional Forest Agreements, discussed in Chapter 6 – the required 5-yearly reviews have often been extensively delayed or not undertaken, which undermines confidence in the agreements (Lacey et al. 2016).

For World Heritage properties and for National Heritage places entirely on Commonwealth land, a management plan is required to be prepared and reviewed every 5 years. Where heritage places are not entirely on Commonwealth land, only ‘best endeavours’ must be made to ensure a plan is in place. Similar requirements apply to Ramsar wetlands. This effectively makes planning optional where management responsibility falls largely to a State or Territory.

Solid processes are in place for the monitoring and reporting of World Heritage properties, which are guided and scrutinised by the international World Heritage Committee. Planning and review of National Heritage places is patchier. While some form of plan is in place or being prepared for most areas, a recent 5-yearly review by the Environment Minister did not assess their effectiveness (DoEE 2019a).

The Director of National Parks (DNP) and Boards for jointly managed Commonwealth reserves are required to prepare management plans, which must be updated every 10 years. All reserves have plans in place, but a 2019 Australian National Audit Office report identified shortcomings in their effectiveness and implementation, which the DNP is working to address (ANAO 2019).

All Commonwealth entities are required to report on ecologically sustainable development (ESD) activities and outcomes in their annual reports (section 516A). The intent is to provide a mechanism to ensure the Commonwealth is considering ESD in its operations, but this has been lost over time. The reality is that most Commonwealth entities report on their use of recycled paper or the energy efficiency of buildings, but exclude the environmental impacts of the policies and programs they implement. It is an administrative burden with no real benefit.

The Department is required to report annually on the operation of the EPBC Act. This is currently delivered as part of the Department’s annual report. Despite some recent improvements, in practice this reporting is focused on outputs and activities rather than the outcomes arising from the operation of the Act. The measures used to report publicly on the operation of the Act consolidate performance information across several programs and they change from year to year. This greatly reduces the usefulness of the reporting effort.

11.1.2 - There are significant gaps in current monitoring and evaluation of the EPBC Act and no cohesive framework

The activities to monitor and report on the EPBC Act are patchy and inconsistent. The various monitoring and reporting requirements in the EPBC Act lack a clear overall purpose and intent.

The broad policy areas of the EPBC Act (Chapter 3), combined with the lack of clearly defined outcomes that the Act seeks to achieve (Chapter 1), provide a challenging foundation for monitoring and evaluating the effectiveness of the Act. Furthermore, the Department lacks the systems (Chapter 10) to collect data on its regulatory activities. This makes assessment of where resources are directed, and the efficiency of activities, difficult.

There are gaps in both the intent and coordination of monitoring, evaluation, review and reporting of the operation of the EPBC Act. Each of the parties involved in the implementation of the Act reports annually on their activities, but the extent to which they articulate and report on outcomes related to the Act varies. All take a different approach.

Limited resources, unclear requirements and a lack of commitment and delivery have resulted in sporadic review and reporting of the range of plans that influence on-ground outcomes and engagement activities. Evaluations are largely delivered as static documents – based on data that are inaccessible and one-off analysis that is unrepeatable – and often there is no adaptive management response.

There is no consistent approach to understanding the links between activities and shorter-term outcomes, or to testing the logic and assumptions about whether activities will ultimately lead to the intended outcomes. The absence of a strategic monitoring and evaluation framework means that there are information gaps that hinder effective evaluation, the resources that are dedicated to monitoring are likely to be inefficient, and there is no clear pathway to learn lessons, adapt and improve.

Long-term monitoring is essential to telling the broader story and tracking overall outcomes for matters of national environmental significance (MNES) in a timely and meaningful way. To date, this effort has been extremely limited and poorly targeted. Project-scale monitoring provides an understanding of individual impacts, but this monitoring is insufficient to understand cumulative impacts in the national context. Overall outcomes for MNES are difficult to track and, as a result, it is very difficult to assess whether the Act is performing.

Poor performance is rarely detected in a timely way, and there is an over-reliance on changes following decadal reviews based on disparate and patchy information. Adjustments to the different management levers to ensure they are operating effectively are not timely, with reform opportunities often reserved for major reviews, such as this one.

To answer the fundamental question of whether the EPBC Act is operating effectively and efficiently, this Review has relied on diverse, disparate and, at times, patchy sources of information as well as the knowledge of contributors. The lack of a data-driven, contemporary and authoritative source of information to assess the performance of the Act provides clear evidence of a failure in monitoring, evaluation and reporting. In modern public policy, this is unacceptable.