National Drug Strategy Household Survey 2016 – Data Quality Statement
Data Quality Statement Attributes
Identifying and definitional attributes | |
Metadata item type: | Data Quality Statement |
---|---|
Synonymous names: | Data Quality Statement: 2016 National Drug Strategy Household Survey |
METEOR identifier: | 682686 |
Registration status: | AIHW Data Quality Statements, Superseded 16/07/2020 |
Data quality | |
Data quality statement summary: |
|
---|---|
Institutional environment: | The Australian Institute of Health and Welfare (AIHW) is a major national agency set up by the Australian Government under the Australian Institute of Health and Welfare Act 1987 (Cwlth) to provide reliable, regular and relevant information and statistics on Australia's health and welfare. It is an independent statutory authority established in 1987, governed by a management Board, and accountable to the Australian Parliament through the Health and Ageing portfolio. The AIHW aims to improve the health and wellbeing of Australians through better health and welfare information and statistics. It collects and reports information on a wide range of topics and issues, ranging from health and welfare expenditure, hospitals, disease and injury, and mental health, to ageing, homelessness, disability and child protection. The Institute also plays a role in developing and maintaining national metadata standards. This work contributes to improving the quality and consistency of national health and welfare statistics. The Institute works closely with governments and non-government organisations to achieve greater adherence to these standards in administrative data collections to promote national consistency and comparability of data and reporting. One of the main functions of the AIHW is to work with the states and territories to improve the quality of administrative data and, where possible, to compile national datasets based on data from each jurisdiction, to analyse these datasets and disseminate information and statistics. The Australian Institute of Health and Welfare Act 1987 (Cwlth), in conjunction with compliance with the Privacy Act 1988 (Cwlth), ensures that the data collections managed by the AIHW are kept securely and under the strictest conditions with respect to privacy and confidentiality.
For further information see the AIHW website www.aihw.gov.au. The AIHW has analysed and reported data from the National Drug Strategy Household Survey (NDSHS) since 1998. In addition, the AIHW has managed the survey process since 2001. Roy Morgan Research Roy Morgan Research (RMR) is an Australian market research company founded in 1941. RMR has experience in conducting all forms of research, particularly public opinion polling, attitude studies, social surveys, and large consumer and industrial market surveys. RMR takes pride in maintaining comprehensive in-house production facilities and the highest quality assurance standards (certified to AS/NZS ISO 9001 and ISO 20252 for all business processes) in the industry. RMR adheres to the standards set out in the Code of Professional Behaviour of the Australian Market and Social Research Society. All RMR staff are familiar with, and adhere to, the Information Privacy Principles under the Privacy Act 1988 (Cwlth). As in previous waves, all personnel involved in the 2016 NDSHS project, including interviewers, signed an AIHW Confidentiality Undertaking. Further details about RMR are available at: http://www.roymorgan.com/about/about-roy-morgan-research/history. The AIHW has commissioned RMR to undertake at least part of the data collection since 1998. |
Timeliness: | The NDSHS is conducted approximately every three years over a three to four month period. 2016 data were collected from 18 June to 29 November 2016. A preliminary data set was received by the AIHW in late-January 2017 and initial data checks were completed in early February 2017. Key findings from the 2016 NDSHS were released on 1 June 2017. |
Accessibility: | Results from the 2016 NDSHS are available on the AIHW website. Key findings can be found in the web compendium: National Drug Strategy Household Survey (NDSHS) 2016 - key findings, and full published results can be found at http://www.aihw.gov.au/reports/illicit-use-of-drugs/ndshs-2016-detailed/ Users can request data not available online by submitting a data request through the AIHW custom data request service. Requests that take longer than half an hour to compile are charged for on a cost-recovery basis. A confidentialised unit record file will be available for third-party analysis through the Australian Data Archive. Access to the master unit record file may be requested through the AIHW Ethics Committee. |
Interpretability: | Information to aid in interpretation of 2016 NDSHS results may be found in Chapter 10 of the 2016 NDSHS report titled ‘Explanatory notes’. In addition, the 2016 Technical Report, code book and other supporting documentation will be available through the Australian Data Archive website or may be requested from AIHW. |
Relevance: | Scope and coverage The NDSHS collects self-reported information on tobacco, alcohol and illicit drug use and attitudes from persons aged 12 years and over. Excluded from sampling were non-private dwellings (hotels, motels, boarding houses, etc.) and institutional settings (hospitals, nursing homes, other clinical settings such as drug and alcohol rehabilitation centres, prisons, military establishments and university halls of residence). Homeless persons were also excluded, as were the territories of Jervis Bay, Christmas Island and the Cocos (Keeling) Islands. The exclusion of people from non-private dwellings and institutional settings, and the difficulty in reaching marginalised people, are likely to have affected estimates. The 2016 NDSHS was designed to provide reliable estimates at the national level. The survey was not specifically designed to obtain reliable national estimates for Aboriginal and Torres Strait Islander people. In 2016, the proportion of the sample aged 12 years and older who identified as Indigenous (2.4%) was similar to this group's share of the population (2.5%), but this equates to only 568 respondents, so results should be interpreted with caution. The survey is not translated into any other languages and requires high levels of English literacy and the ability to follow skip patterns. Reference period The fieldwork was conducted from 18 June to 29 November 2016. Respondents to the survey were asked questions relating to their behaviours, beliefs and experiences covering differing time periods, predominantly the previous 12 months. Geographic detail In 2016, data were coded to statistical area level 1 (SA1). Data are generally published at the national level with a selection of data published at the State/Territory, Remoteness Area and Primary Health Network levels.
Statistical standards Data on tobacco and alcohol consumption were collected in accordance with World Health Organization standards and alcohol risk data were reported in accordance with the current 2009 National Health and Medical Research Council ‘Australian Guidelines to Reduce Health Risks from Drinking Alcohol’. Australian and New Zealand Standard Classification of Occupations (ANZSCO) and Australian and New Zealand Standard Industrial Classification (ANZSIC) codes were used as the code-frame for questions relating to occupation and industry. The Standard Australian Classification of Countries (SACC) codes were used as the code-frame for the question relating to country of birth. Type of estimates available Weighted estimates of drug use prevalence, attitudes and beliefs are most commonly reported. In addition, some population numbers and age-standardised data are available for some aspects of the collection. Time series data are presented for most estimates in the 2016 NDSHS supplementary tables. |
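Age-standardised estimates such as those mentioned above are typically produced by direct standardisation: weighting each age-specific rate by the standard population count for that age group. A minimal sketch of the calculation follows; the age groups, rates and standard population figures are hypothetical illustrations, not NDSHS values:

```python
def age_standardised_rate(age_rates, standard_pop):
    """Direct age standardisation: weight each age-specific rate by the
    standard population count in that age group, then divide by the
    total standard population."""
    total = sum(standard_pop.values())
    return sum(age_rates[a] * standard_pop[a] for a in age_rates) / total

# Hypothetical age-specific prevalence rates and standard population
rates = {"12-24": 0.20, "25-44": 0.15, "45+": 0.05}
std_pop = {"12-24": 4_000_000, "25-44": 7_000_000, "45+": 9_000_000}
standardised = age_standardised_rate(rates, std_pop)  # 0.115
```

Standardising in this way allows prevalence to be compared across years even when the age structure of the population changes between survey waves.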
Accuracy: | Sample design The sample was stratified by region (15 strata in total: capital city and rest of state for each state and territory, with the exception of the Australian Capital Territory, which operated as one stratum). To produce reliable estimates for the smaller states and territories, sample sizes were boosted in Tasmania, the Australian Capital Territory and the Northern Territory. An additional booster sample of 500 completed responses was allocated to South Australia and Victoria. The over-sampling of less populated states and territories produced a sample that was not proportional to the state/territory distribution of the Australian population aged 12 years or older. Weighting was applied to adjust for imbalances arising from execution of the sampling and differential response rates, and to ensure that the results relate to the Australian population. Sampling error The measure used to indicate the reliability of individual estimates reported in 2016 was the relative standard error (RSE). Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. Results subject to RSEs of between 25% and 50% should be considered with caution, and those with RSEs greater than 50% should be considered unreliable for most practical purposes. Estimates with RSEs greater than 90% have not been published in the 2016 supplementary tables. Non-sampling error In addition to sampling errors, the estimates are subject to non-sampling errors. These can arise from errors in reporting of responses (for example, failure of respondents’ memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and higher levels of non-response from certain subgroups of the population.
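The RSE thresholds described under 'Sampling error' amount to a simple classification rule. The sketch below illustrates them; the function names are ours, not part of the NDSHS outputs:

```python
def rse(estimate, standard_error):
    """Relative standard error: the standard error expressed as a
    percentage of the estimate itself."""
    return 100 * standard_error / estimate

def reliability(rse_percent):
    """Classify an estimate using the 2016 NDSHS reporting thresholds."""
    if rse_percent > 90:
        return "not published"    # suppressed in the supplementary tables
    if rse_percent > 50:
        return "unreliable"       # unreliable for most practical purposes
    if rse_percent >= 25:
        return "use with caution" # interpret with caution
    return "reliable"             # sufficiently reliable for most purposes

# Example: an estimate of 50 with a standard error of 5 has an RSE of 10%
label = reliability(rse(50, 5))  # "reliable"
```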
Programming errors in the online survey Throughout the fieldwork period, interim datasets were checked and a few errors in the filtering and sequencing of the online/telephone survey were discovered. Each of these errors was fixed immediately upon discovery. However, in each instance a proportion of respondents had not been asked questions that they should have been asked. The errors included:
Mode effects Selected individuals could choose to complete the survey via a paper form, an online form or a telephone interview. It is possible that the tool (also known as the ‘mode’) used by a respondent could have an impact on the information provided, introducing a bias in the data and affecting the comparability of data obtained via the different methods. A total of 23,772 people completed the 2016 survey. Of these, 18,528 (78%) completed on paper, 5,170 (22%) completed online and 74 (0.3%) completed via a telephone interview. In 2016, respondents who elected to use the online form had different demographic characteristics (such as age and level of education) to respondents who used the paper form. A respondent’s demographic characteristics affect their choice between a paper survey and an online survey and are also known to affect the likelihood of reporting drug use. Therefore, these demographic characteristics needed to be taken into account when assessing whether there is a mode effect. Regression analysis, which controls for the known demographics of respondents, was used to test whether there could be a mode effect across the three collection modes used in 2016. After adjusting for socio-demographic factors, significant differences in prevalence rates between online and paper respondents were found in 4 of the 9 variables studied. The regression model suggests no significant difference between paper and online collection tools for drinking status; lifetime risk and single occasion risk status; and recent use of methamphetamines and tranquillisers. Estimates for smoking, cocaine, pain-killers/opiates and cannabis may have been affected by a mode effect between paper and online forms (online respondents were less likely than paper respondents to be a daily smoker, or to have used cocaine, pain-killers/opiates or cannabis in the previous 12 months, after adjusting for demographic characteristics).
This should be taken into account when comparing 2016 estimates with previous survey results. Data validation To enhance the reliability of estimates in the survey and maximise data quality, a small number of missing and contradictory responses were imputed through a rigorous menu of cross-validation edit and logic checks. For example, if a respondent failed to indicate a lifetime usage response (missing) or answered ‘no—never used’, but then provided detailed responses to subsequent questions (e.g. used in the last 12 months, how used, where used, source of supply), the missing or contradictory response was recoded as ‘yes’. These logic checks have been applied since 1998. Statistical Linkage Key (SLK) validity For the first time, the 2016 NDSHS included a self-complete Statistical Linkage Key (SLK) at the end of the survey. Approximately 67% of respondents attempted to complete the SLK and about 54% of respondents appear to have fully completed it, equating to 12,758 people. At the time the report was written, no ‘cleaning’ of the SLK had been undertaken, and it is possible that cleaning some of the incomplete SLKs (13%) would result in additional completions. The quality of the SLK will affect any future linkage of these data but does not otherwise affect the quality of other data collected in this survey. Non-response bias Non-response bias can potentially occur when selected respondents cannot or will not participate in the survey, or cannot be contacted during the fieldwork period. The magnitude of any non-response bias depends on the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not, as well as the extent to which non-response adjustments can be made during estimation (ABS 2007).
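The cross-validation edit described under 'Data validation' can be sketched as a simple recoding rule. The field names below are illustrative placeholders, not the actual NDSHS variable codes:

```python
def recode_lifetime_use(record):
    """Cross-validation edit: if lifetime use is missing or 'no' but the
    respondent gave detailed answers to follow-up questions (e.g. recent
    use, how used, where used, source of supply), recode lifetime use to
    'yes'. Keys here are hypothetical stand-ins for the survey variables."""
    followups = ("used_last_12_months", "how_used", "where_used", "source_of_supply")
    has_detail = any(record.get(k) for k in followups)
    if record.get("lifetime_use") in (None, "no") and has_detail:
        return {**record, "lifetime_use": "yes"}  # contradictory/missing -> yes
    return record
```

Applied to a contradictory record such as `{"lifetime_use": "no", "used_last_12_months": "yes"}`, the rule recodes lifetime use to 'yes'; consistent records pass through unchanged.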
Response rates and contact rates Overall, contact was made with 46,487 in-scope households, of which 23,772 returned questionnaires categorised as complete and useable, representing a response rate of 51.1% for the 2016 survey, higher than the response rates for the 2013 and 2010 surveys (49.1% and 50.6%, respectively). A low response rate does not necessarily mean that the results are biased. As long as the non-respondents are not systematically different in terms of how they would have answered the questions, there is no bias. Given the nature of the topics in this survey, some non-response bias is expected. To eliminate non-response bias in the NDSHS as far as possible, additional work would need to be conducted to investigate the demographic profile of the non-respondents and the answers they may have given had they chosen to respond. Incomplete responses Some survey respondents did not answer all questions, either because they were unable or unwilling to provide a response. The survey responses for these people were retained in the sample, and the missing values were recorded as not answered. No attempt was made to deduce or impute these missing values. Response bias Survey estimates are subject to non-sampling errors that can arise from errors in reporting of responses (for example, failure of respondents’ memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and higher levels of non-response from certain subgroups of the population. A limitation of the survey is that the data are self-reported and people may not accurately report information relating to illicit drug use and related behaviours because these activities may be illegal. This means that results relating to illicit drugs may be under-reported. However, any biases are likely to be relatively consistent at the population level over time, so would not be expected to have much effect on trend analysis.
Legislation protecting people’s privacy and the use of consistent methodology over time mean that the impact of this issue on prevalence estimates is limited. However, some behaviours may become less socially acceptable over time, which may lead to an increase in socially desirable responses rather than accurate responses. Increases in media reporting stigmatising a drug may increase the tendency to under-report use (Chalmers et al. 2014). Any potential increase in self-reported socially desirable behaviours needs to be considered when interpreting survey results over time. Non-response adjustment The estimation methods used take into account non-response and adjust for any under-representation of population subgroups in an effort to reduce non-response bias. The sample was designed to provide a random sample of households within each geographic stratum. Respondents within each stratum were assigned weights to overcome imbalances arising in the design and execution of the sampling. The main weighting took into account geographical stratification, household size, age and sex. Sex In line with the Australian Bureau of Statistics Standard for Sex and Gender Variables, an ‘Other (please specify)’ response was added to the sex question in the 2016 NDSHS. Twenty-three respondents reported their sex as ‘other’. These people are included in any ‘persons’ totals presented but are excluded from analyses disaggregated by sex, as the data for ‘other’ sex were too unreliable to publish. Indigenous data The survey was not specifically designed to obtain reliable national estimates for Aboriginal and Torres Strait Islander people. In the 2016 NDSHS, 2.4% of the sample (approximately 568 respondents) identified as being of Aboriginal and/or Torres Strait Islander origin. The sample size for Indigenous Australians is relatively small, so estimates based on this population group should be interpreted with caution.
The total population of Aboriginal and Torres Strait Islander people forms a very small part of the total Australian population. According to the June 2011 estimated resident population figures, there were 699,881 Aboriginal and Torres Strait Islander people, or 3.0% of the total Australian population (ABS 2008b). At that time, about one-third (35%) of the Aboriginal and Torres Strait Islander population lived in Major cities, 22% in Inner regional areas, 22% in Outer regional areas, 8% in Remote areas and 14% in Very remote areas (ABS 2013). The Aboriginal and Torres Strait Islander population living in Very remote areas also differs from the population living in Major cities in household structure, size and age distribution. In terms of remoteness, the 2016 NDSHS sample was more representative of the Indigenous population than in previous surveys: 32% lived in Major cities, 18% in Inner regional areas, 22% in Outer regional areas, 13% in Remote areas and 14% in Very remote areas, whereas in the 2013 NDSHS, 31% lived in Major cities, 19% in Inner regional areas, 29% in Outer regional areas, 11.5% in Remote areas and 8.9% in Very remote areas. The NDSHS uses a self-completion questionnaire and requires good comprehension of the English language (as it is not translated into other languages) and the ability to follow instructions. The practicality of the survey design meant that some Aboriginal communities and people with low levels of English literacy were excluded. In 2016, six of the 1,764 originally selected SA1s were Aboriginal communities with relatively low levels of English and English literacy and were replaced prior to commencement of fieldwork. The exclusion of these communities makes it difficult to generalise the results of the NDSHS to the whole Indigenous population. |
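The weighting approach described under 'Non-response adjustment' in this section amounts to scaling each respondent by the ratio of the population count to the sample count in their weighting cell. A minimal post-stratification sketch follows; the cell labels and counts are hypothetical, and the real NDSHS weighting combined geographic stratum, household size, age and sex:

```python
from collections import Counter

def poststratify(sample_cells, population_counts):
    """Assign each respondent a weight equal to the population count of
    their weighting cell divided by the number of sampled respondents in
    that cell, so weighted totals match known population benchmarks."""
    n = Counter(sample_cells)
    return [population_counts[c] / n[c] for c in sample_cells]

# Hypothetical example: three respondents across two age-by-sex cells
cells = ["F 12-17", "F 12-17", "M 12-17"]
pop = {"F 12-17": 900_000, "M 12-17": 950_000}
weights = poststratify(cells, pop)  # [450000.0, 450000.0, 950000.0]
```

By construction, the weights in each cell sum to that cell's population count, which is what makes under-represented subgroups count fully in the weighted estimates.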
Coherence: | Surveys in this series commenced in 1985. Over time, modifications have been made to the survey’s methodology and questionnaire design. The 2016 survey differs from previous versions of the survey in some of the questions asked and was also the first time that a multi-mode completion methodology was used. Methodology The 2016 survey used a multi-mode completion methodology—respondents could choose to complete the survey via a paper form, an online form or via a telephone interview. This was the first time an online form has been used. The 2013 and 2010 surveys consisted solely of a self-completion drop-and-collect method. In 2007 and 2004, a combination of computer-assisted telephone interviews (CATI) and drop and collect methods were used, and in earlier waves, personal interviews were also conducted. The change in methodology in 2016 does have some impact on time series data, and users should exercise some degree of caution when comparing data over time (see ‘Mode effects’ in the accuracy section for more information). To improve the geographic coverage of the survey, interviewers have been flown to Very remote areas selected in the sample since 2010. In previous surveys, some Very remote areas that were initially selected in the sample would have been deemed inaccessible and not included in the final sample. Fieldwork was conducted between 18 June and 29 November 2016, similar timing to the previous wave, but the collection period was slightly longer. The collection period coincided with the 2016 Census conducted by the ABS, and fieldwork was put on hold for four days during this period—the day of the Census and for three days afterwards to minimise respondent fatigue. The 9th of August 2016 was Census night in Australia. The vast majority of Census returns were to be completed via an online questionnaire. The Census website crashed that evening, with many households subsequently unable to complete the online Census form for up to two days. 
The website crash was a major media item for the next month or so and raised privacy and confidentiality issues. There was some concern that the Census website crash could have a significant impact on response, and particularly online response, for the 2016 NDSHS. As a consequence, weekly monitoring of acceptance and completion rates was undertaken and compared with rates attained pre-Census. The finding was that the Census website crash had no significant impact on the 2016 NDSHS. Comparability with 1998 and earlier surveys The 1998 survey introduced a number of methodological enhancements that potentially affected comparability with previous survey results. At the time the report was written, it was hypothesised that the cross-validation between lifetime and recent use may have systematically produced marginally higher prevalence estimates than if the 1995 methodology had been used. However, the 1998 NDSHS Technical Advisory Committee considered that the slight loss of comparability with 1995 was more than compensated for by the increase in the reliability of the 1998 estimates. The sample size of the survey was substantially increased, from 3,850 in 1995 to 10,030 in 1998. However, the increase in prevalence estimates reported in 1998 did not continue in subsequent surveys. At the time, it was difficult to identify whether the increase in the 1998 prevalence estimates was an anomaly or a real increase in prevalence. It is only the availability of time series data from subsequent surveys that has enabled an assessment of this issue and made it evident that the 1998 prevalence estimates were an anomaly. There are a number of possible causes that may have resulted in a ‘spike’ in prevalence in 1998, including substantial differences in the sampling and weighting methodology and sample frame:
There were 3 samples in 1998: Sample 1—National random selection of households, where the person aged 14 years or over with the next birthday was randomly selected. Data were collected from personal interviews and self-completion booklets for the more sensitive questions. The number of respondents who completed the survey from this sample was 4,012. Sample 2—Drawn from the same households as Sample 1. The youngest person aged 14 or older, other than the Sample 1 respondent, was selected. Data were collected by self-completion booklets. Where a questionnaire was completed subsequent to the Sample 1 interview, one attempt was made to personally collect the questionnaire. If still incomplete, the respondent was provided with a reply-paid pre-addressed envelope. The number of respondents who completed the survey from this sample was 1,983. Sample 3—Capital cities only. From a random selection of households, the person aged 14 to 39 years with the next birthday was randomly selected. Data were collected by self-completion booklets. Questionnaires were left for completion and interviewers returned 2 days later to collect them. Where a questionnaire was not completed by this time, the respondent was provided with a reply-paid pre-addressed envelope. The number of respondents who completed the survey from this sample was 4,035. It is likely that systematic biases were introduced by the split sampling design in 1998. |
Source and reference attributes | |
Submitting organisation: | Tobacco, Alcohol and Other Drugs Unit, Australian Institute of Health and Welfare. |
Steward: | Australian Institute of Health and Welfare |
Relational attributes | |
Related metadata references: | Supersedes National Drug Strategy Household Survey 2013 – Data Quality Statement (AIHW Data Quality Statements, Superseded 28/09/2017) |