Place Survey 2008 Report: Background Notes

Background to the Place Survey

In Autumn 2008, local authorities across England were required to conduct a postal survey of residents. The 'Place Survey' was designed to capture local people's views, experiences and perceptions, to ensure services and solutions reflect local views and preferences. The Place Survey was run between September and December 2008. Each questionnaire was to be completed by a resident aged 18 or over living at the selected address. In total, 329 local authorities ran the survey. Twenty-four county councils did not run the survey; results for these councils are derived from their constituent districts. The survey collected information on 18 national indicators for local government, used to monitor performance in 152 local authorities (county councils, metropolitan district councils, London boroughs and unitary authorities).

The Place Survey is designed primarily for use at the local level. It supplies the data used to measure a number of National Indicators, which will measure how well the Government's priorities are being delivered by local government and local government partnerships over the next three years.

Each individual council was responsible for running the survey in its local area, using a core questionnaire supplied by Communities and Local Government. A copy of the questionnaire and a copy of the manual supplied to local authorities are available. The survey will be carried out every two years to enable local authorities and partners to track people's changing perceptions, as a way of determining whether interventions made in an area result in the right outcomes for residents.

More detail on the Place Survey is available from the CLG web site.

Information about this interactive tool

National headline results from the Place Survey 2008 were published by the Department for Communities and Local Government (CLG) on 23 June 2009 for regions and local authorities in England. This dynamic report is based on those headline results. It is designed to show the results of the Place Survey at a local level and to enable comparisons between local areas.

The only significant data 'addition' for this report was to create a grading system of all areas for each indicator, ranging from A ('strongest/best') to E ('weakest/worst'). The grading categories are based on dividing all national authorities (counties, districts, unitaries and metropolitan boroughs) that feature across both interactive tools into equal 20% intervals, or 'quintiles'. So, for example, the 20% of areas with the highest % of satisfied respondents for a specific question will be graded A, while the 20% of areas with the lowest % of satisfied respondents will be graded E. Where indicators measure the % of dissatisfied respondents, the 20% of areas with the highest values are graded E. The grading system is at a national level, so an area is graded in respect of all other areas in England.
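
As an illustration of this grading, the sketch below (Python, using pandas, with hypothetical authority names, column names and scores) divides a set of indicator values into quintiles and assigns grades, reversing the label order for indicators that measure dissatisfaction.

import pandas as pd

def grade_quintiles(scores: pd.Series, higher_is_better: bool = True) -> pd.Series:
    """Grade each authority A (strongest) to E (weakest) by national quintile."""
    labels = ["E", "D", "C", "B", "A"] if higher_is_better else ["A", "B", "C", "D", "E"]
    # pd.qcut splits the ranked scores into five equal-sized (20%) groups;
    # ranking first breaks ties so every authority falls into exactly one quintile
    return pd.qcut(scores.rank(method="first"), q=5, labels=labels)

# Hypothetical data: a 'satisfied' indicator (high % = grade A) and a
# 'dissatisfied' indicator (high % = grade E).
df = pd.DataFrame({
    "authority": ["A1", "A2", "A3", "A4", "A5"],
    "pct_satisfied": [72.0, 55.5, 63.2, 48.9, 80.1],
    "pct_dissatisfied": [10.2, 25.0, 18.7, 30.1, 8.4],
})
df["grade_satisfied"] = grade_quintiles(df["pct_satisfied"], higher_is_better=True)
df["grade_dissatisfied"] = grade_quintiles(df["pct_dissatisfied"], higher_is_better=False)
print(df)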

We had to use two different reporting tools because it is not possible to show districts and counties together on one map. We chose to include unitary authorities and metropolitan authorities within these tools, alongside the districts and counties, so that the maps had complete coverage and there were no gaps in the data. Having 'districts and unitaries' and 'counties and unitaries' shown together was also thought to be useful for agencies working across boundaries, e.g. the Police.

It is important to note that the A to E grades are static: they do not change in relation to any filters applied within the tool. For example, if you filter the results to show all four-star authorities, the map and graph change accordingly, but the A to E grades will still refer to all the authorities within that tool.

We have also added a number of filters (click on the filter button) to help users subset the national data. These include filters based on:

(1) ONS Group - based on the ONS national classification of authorities in 2001; more information can be found at http://www.statistics.gov.uk/about/methodology_by_theme/area_classification/

(2) CPA score and direction of travel - the latest Audit Commission Comprehensive Performance Assessment results; see CPA scores on the Audit Commission site. Results are reported in a different way for top-tier authorities compared with districts, hence the different filters.

(3) Nearest Neighbour filters - based on the CIPFA Stats model for calculating the top 10 statistical nearest neighbours for any local authority, to assist in effective benchmarking (a toy sketch of the idea follows below); more information is available from http://www.cipfastats.net/
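
The sketch below is a toy illustration of the nearest-neighbour idea only: the real CIPFA Stats model uses its own variables and weighting, whereas here a few made-up characteristics are standardised and authorities are ranked by plain Euclidean distance.

import numpy as np
import pandas as pd

def nearest_neighbours(features: pd.DataFrame, authority: str, n: int = 10) -> pd.Index:
    """Return the n authorities most similar to `authority` (excluding itself)."""
    z = (features - features.mean()) / features.std()          # standardise each characteristic
    dist = np.sqrt(((z - z.loc[authority]) ** 2).sum(axis=1))  # Euclidean distance to the target
    return dist.drop(authority).nsmallest(n).index

# Hypothetical characteristics, one row per authority (not CIPFA's variables).
features = pd.DataFrame(
    {"population": [120_000, 95_000, 310_000, 101_000],
     "density": [4.1, 3.8, 12.6, 4.0],
     "unemployment_rate": [5.2, 4.9, 7.8, 5.0]},
    index=["Authority A", "Authority B", "Authority C", "Authority D"],
)
print(nearest_neighbours(features, "Authority A", n=2))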

Issues around processing and interpretation (Source: CLG Results Report)

Unlike other surveys (e.g. the Citizenship Survey), the Place Survey was not run by a single contractor under a single contract - each local authority was responsible for running its own survey. Ensuring data quality was complicated, given the large number of separate surveys. Quality was assured in a number of ways.

The Place Survey manual detailed eight common standards that needed to be followed when conducting the survey: following the timetable; using the questionnaire template; using the appropriate sampling method; using a correct sampling frame; using a common method of data collection (postal); maximising response rates; achieving a sufficient sample to enable statistically reliable data; and submitting results using templates and tools provided on a dedicated Place Survey website.

The provisional data sent to the Audit Commission were then subjected to initial checks and weighted, and provisional national indicator results (scores and confidence intervals) were sent to the 152 county councils, metropolitan district councils, unitary authorities and London boroughs.

Communities and Local Government conducted a quality review of the survey, involving an independent academic statistician as well as members of the Government Statistical Service (GSS). The review was based around the principles in the Code of Practice for Official Statistics: http://www.statisticsauthority.gov.uk/assessment/code-of-practice/code-of-practice-for-official-statistics.pdf

On the advice of the review, the provisional data were revised. The revised results reported in this release and the accompanying tables are based on the same underlying data submitted by councils and used to calculate the provisional results. The differences between the provisional and revised results arise from (1) capping of the scaled (final) weights, to reduce the impact of individual responses on the overall estimates, and (2) the application of an inflation factor to the confidence intervals, so that they more accurately capture the impact of the survey design and non-response. This inflation factor is based on the weighting and therefore varies between local authorities.
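
As an illustration of these two adjustments, the sketch below caps a set of hypothetical scaled weights and derives a confidence-interval inflation factor from the standard Kish design effect, a common weighting-based approach. The actual cap value and the exact formula used are not given in these notes, so both should be read as assumptions.

import numpy as np

def cap_weights(weights: np.ndarray, cap: float) -> np.ndarray:
    """Cap the scaled weights so no single response dominates the estimate.
    The cap value here is hypothetical."""
    return np.minimum(weights, cap)

def kish_inflation_factor(weights: np.ndarray) -> float:
    """Square root of the Kish design effect: how much unequal weights
    widen a confidence interval relative to simple random sampling."""
    n = len(weights)
    deff = n * np.sum(weights ** 2) / np.sum(weights) ** 2
    return float(np.sqrt(deff))

weights = cap_weights(np.array([0.4, 1.1, 6.3, 0.9, 2.2]), cap=3.0)
half_width = 2.5                      # naive CI half-width, in percentage points
factor = kish_inflation_factor(weights)
print(f"inflation factor: {factor:.3f}")
print(f"inflated CI: +/- {half_width * factor:.2f} percentage points")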

The review considered the impact of low response rates in some areas. There is no evidence that specific sections of the population or particular localities were systematically underrepresented. Furthermore, as noted above, the inflation factor applied to the confidence intervals following the review has improved the robustness of the results. Nonetheless, where response rates are low (less than 30%) and confidence intervals are wide (wider than +/- 3 percentage points), some caution may be necessary when using the results to set performance targets (for example as part of local area agreements), particularly when the target is linked to a financial reward.
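
A minimal sketch of this caution rule, using hypothetical authority names and figures:

def needs_caution(response_rate_pct: float, ci_half_width_pp: float) -> bool:
    """Flag results where the response rate is below 30% and the confidence
    interval is wider than +/- 3 percentage points."""
    return response_rate_pct < 30.0 and ci_half_width_pp > 3.0

results = [
    {"authority": "Authority A", "response_rate_pct": 42.0, "ci_half_width_pp": 2.1},
    {"authority": "Authority B", "response_rate_pct": 27.5, "ci_half_width_pp": 3.6},
]
for r in results:
    flag = needs_caution(r["response_rate_pct"], r["ci_half_width_pp"])
    print(r["authority"], "caution advised" if flag else "ok")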