
National Drug Strategy Household Survey 2016 – Data Quality Statement

Identifying and definitional attributes

Metadata item type: Quality Statement
Synonymous names: Data Quality Statement: 2016 National Drug Strategy Household Survey
METeOR identifier: 682686
Registration status: AIHW Data Quality Statements, Endorsed 28/09/2017

Data quality

Quality statement summary:
  • Reported findings are based on self-reported data and are not empirically verified by blood tests or other screening measures.
  • It is known from past studies of alcohol and tobacco consumption that respondents tend to underestimate actual consumption levels.
  • Estimates of illicit drug use and related behaviours are also likely to be underestimates of actual use.
  • The exclusion of persons from non-private dwellings, institutional settings, homeless people and the difficulty in reaching marginalised persons are likely to have affected estimates.
  • The response rate for the 2016 survey was 51.1%. Given the nature of the topics in this survey, some non-response bias is expected, but this bias has not been measured.
  • Both sampling and non-sampling errors should be considered when interpreting results.
  • The 2016 survey used a multi-mode completion methodology—respondents could choose to complete the survey via a paper form, an online form or via a telephone interview. This was the first time an online form has been used in the survey series. Changes in mode may have some impact on responses, and users should exercise some degree of caution when comparing data over time.
Institutional environment:

The Australian Institute of Health and Welfare (AIHW) is a major national agency set up by the Australian Government under the Australian Institute of Health and Welfare Act 1987 (Cwlth) to provide reliable, regular and relevant information and statistics on Australia's health and welfare. It is an independent statutory authority established in 1987, governed by a management Board, and accountable to the Australian Parliament through the Health and Ageing portfolio.

The AIHW aims to improve the health and wellbeing of Australians through better health and welfare information and statistics. It collects and reports information on a wide range of topics and issues, ranging from health and welfare expenditure, hospitals, disease and injury, and mental health, to ageing, homelessness, disability and child protection.

The Institute also plays a role in developing and maintaining national metadata standards. This work contributes to improving the quality and consistency of national health and welfare statistics. The Institute works closely with governments and non-government organisations to achieve greater adherence to these standards in administrative data collections to promote national consistency and comparability of data and reporting.

One of the main functions of the AIHW is to work with the states and territories to improve the quality of administrative data and, where possible, to compile national datasets based on data from each jurisdiction, to analyse these datasets and disseminate information and statistics.

The Australian Institute of Health and Welfare Act 1987 (Cwlth), in conjunction with compliance to the Privacy Act 1988 (Cwlth), ensures that the data collections managed by the AIHW are kept securely and under the strictest conditions with respect to privacy and confidentiality.

For further information see the AIHW website www.aihw.gov.au

The AIHW has analysed and reported data from the National Drug Strategy Household Survey (NDSHS) since 1998. In addition, the AIHW has managed the survey process since 2001.

Roy Morgan Research

Roy Morgan Research (RMR) is an Australian market research company founded in 1941. RMR has experience in conducting all forms of research, particularly public opinion polling, attitude studies, social surveys, and large consumer and industrial market surveys.

RMR takes pride in maintaining comprehensive in-house production facilities and the highest quality assurance standards in the industry (certified to AS/NZS ISO 9001 and ISO 20252 for all business processes). RMR adheres to the standards set out in the Code of Professional Behaviour of the Australian Market and Social Research Society.

All RMR staff are familiar with, and adhere to, the Information Privacy Principles under the Privacy Act 1988 (Cwlth). As in previous waves, all personnel involved in the 2016 NDSHS project, including interviewers, signed an AIHW Confidentiality Undertaking.

Further details about RMR are available at:

http://www.roymorgan.com/about/about-roy-morgan-research/history

The AIHW has commissioned RMR to undertake at least part of the data collection since 1998.

Timeliness:

The NDSHS is conducted approximately every three years over a three to four month period. 2016 data were collected from 18 June to 29 November 2016.

A preliminary data set was received by the AIHW in late-January 2017 and initial data checks were completed in early February 2017.

Key findings from the 2016 NDSHS were released on 1 June 2017.

Accessibility:

Results from the 2016 NDSHS are available on the AIHW website. Key findings can be found in the web compendium National Drug Strategy Household Survey (NDSHS) 2016—key findings, and full published results can be found at http://www.aihw.gov.au/reports/illicit-use-of-drugs/ndshs-2016-detailed/

Users can request data not available online by submitting a data request through the AIHW custom data request service. Requests that take longer than half an hour to compile are charged for on a cost-recovery basis.

A confidentialised unit record file will be available for third-party analysis through the Australian Data Archive. Access to the master unit record file may be requested through the AIHW Ethics Committee.

Interpretability:

Information to aid in interpretation of 2016 NDSHS results may be found in Chapter 10 of the 2016 NDSHS report titled ‘Explanatory notes’.

In addition, the 2016 Technical Report, code book and other supporting documentation will be available through the Australian Data Archive website or may be requested from AIHW.

Relevance:

Scope and coverage

The NDSHS collects self-reported information on tobacco, alcohol and illicit drug use and attitudes from persons aged 12 years and over.

Excluded from sampling were non-private dwellings (hotels, motels, boarding houses, etc.) and institutional settings (hospitals, nursing homes, other clinical settings such as drug and alcohol rehabilitation centres, prisons, military establishments and university halls of residence). Homeless people were also excluded, as were the territories of Jervis Bay, Christmas Island and the Cocos (Keeling) Islands.

The exclusion of people from non-private dwellings and institutional settings, and the difficulty in reaching marginalised people are likely to have affected estimates.

The 2016 NDSHS was designed to provide reliable estimates at the national level. The survey was not specifically designed to obtain reliable national estimates for Aboriginal and Torres Strait Islander people. In 2016, the proportion of the sample aged 12 years and older identifying as Indigenous was similar to the population proportion (2.4% compared with 2.5%), but this equates to a sample of only 568 people, so results should be interpreted with caution.

The survey is not translated into any other languages and requires high levels of English literacy and the ability to follow skip patterns.

Reference period

The fieldwork was conducted from 18 June to 29 November 2016. Respondents to the survey were asked questions relating to their behaviours, beliefs and experiences covering differing time periods, predominantly over the previous 12 months.

Geographic detail

In 2016, data were coded to the statistical area level 1 (SA1). Data are generally published at the national level with a selection of data published at the State/Territory, Remoteness Area and Primary Health Network levels.

Statistical standards

Data on tobacco and alcohol consumption were collected in accordance with World Health Organization standards and alcohol risk data were reported in accordance with the current 2009 National Health and Medical Research Council ‘Australian Guidelines to Reduce Health Risks from Drinking Alcohol’.

Australian and New Zealand Standard Classification of Occupations (ANZSCO) and Australian and New Zealand Standard Industrial Classification (ANZSIC) codes were used as the code-frame for questions relating to occupation and industry.

The Standard Australian Classification of Countries (SACC) codes were used as the code-frame for the question relating to country of birth.

Type of estimates available

Weighted estimates of drug use prevalence, attitudes and beliefs are most commonly reported. In addition, some population numbers and age-standardised data are available for some aspects of the collection. Time series data are presented for most estimates in the 2016 NDSHS supplementary tables.

Accuracy:

Sample design

The sample was stratified by region (15 strata in total: capital city and rest of state for each state and territory, with the exception of the Australian Capital Territory, which operated as one stratum). To produce reliable estimates for the smaller states and territories, sample sizes were boosted in Tasmania, the Australian Capital Territory and the Northern Territory. An additional booster sample of 500 completed responses was allocated to South Australia and Victoria.

The over-sampling of lesser populated states and territories produced a sample that was not proportional to the state/territory distribution of the Australian population aged 12 years or older. Weighting was applied to adjust for imbalances arising from execution of the sampling and differential response rates, and to ensure that the results relate to the Australian population.

Sampling error

The measure used to indicate reliability of individual estimates reported in 2016 was the relative standard error (RSE). Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes. Results subject to RSEs of between 25% and 50% should be considered with caution and those with relative standard errors greater than 50% should be considered as unreliable for most practical purposes.

Estimates with RSE greater than 90% have not been published in the 2016 supplementary tables.
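
As an illustrative sketch (not AIHW code), the RSE calculation and the reliability thresholds described above can be expressed as:

```python
# Illustrative only: relative standard error (RSE) and the reliability
# thresholds stated in this section (25%, 50%, 90%).

def rse(estimate: float, standard_error: float) -> float:
    """Relative standard error as a percentage of the estimate."""
    return 100.0 * standard_error / estimate

def reliability(rse_pct: float) -> str:
    """Classify an estimate using the thresholds stated above."""
    if rse_pct < 25:
        return "reliable for most purposes"
    if rse_pct <= 50:
        return "use with caution"
    if rse_pct <= 90:
        return "unreliable for most practical purposes"
    return "not published"

# Example: a prevalence estimate of 15.6% with a standard error of 0.8
print(reliability(rse(15.6, 0.8)))  # -> reliable for most purposes
print(reliability(rse(2.1, 0.9)))   # -> use with caution
```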

Non-sampling error

In addition to sampling errors, the estimates are subject to non-sampling errors. These can arise from errors in reporting of responses (for example, failure of respondents’ memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and the higher levels of non-response from certain subgroups of the population.

Programming errors in the online survey

Throughout the fieldwork period, interim datasets were checked and a few errors in the filtering and sequencing of the online/telephone survey were discovered. Each of these errors was fixed immediately upon discovery. However, in each instance a proportion of respondents had not been asked questions that they should have been asked. The errors included:

  • Question D23 – Factors that would motivate you to quit smoking. This question was not asked of daily smokers who identified as such when asked if they had ever smoked daily (code 1 on D7). This affected 228 respondents.
  • Questions X2A/X2B – Factors influencing first use of an illicit drug and reasons for illicit drug use. The filter before X2A instructs that all respondents who had ever used an illicit drug should be asked X2A and X2B. However, this question was not asked if a respondent had only used marijuana and not any other illicit drug, and also reported that they purchased their cannabis ‘from a dealer’ at L8A and L8B. This affected 629 online/telephone respondents out of 1,385 online/telephone respondents who qualified to be asked the question. In addition to this, approximately 30% of paper respondents who qualified to answer questions X2A and X2B did not answer them, whether deliberately or because they did not understand the instruction. Across all modes, 7,032 respondents out of a potential 10,092 answered X2A and X2B (70%). The largest component of non-response to these questions was paper respondents who did not follow or did not understand the instructions (see questionnaire for exact wording and instructions).
  • Question Y16 – Did any of the drug-related incidents of physical abuse involve sexual abuse. The instruction for programming Y16 was to ask all respondents who answered yes to any of the statements on Y9 (i.e. In the last 12 months, did a person under the influence of or affected by illicit drugs verbally abuse you, physically abuse you or put you in fear). However, a programming error meant that online/telephone respondents were only asked Y16 if they had answered yes to any of the statements on Y1. This affected 11 respondents.

Mode effects

Selected individuals could choose to complete the survey via a paper form, an online form or via a telephone interview.

It is possible that the tool (also known as the ‘mode’) that is used by a respondent could have an impact on the actual information provided, introducing a bias in the data and affecting comparability of data obtained via the different methods.

A total of 23,772 people completed the 2016 survey. Of these, 18,528 (78%) completed on paper, 5,170 (22%) completed online and 74 (0.3%) completed via a telephone interview. In 2016, respondents who elected to use the online form had different demographic characteristics (such as age and level of education) from respondents who used the paper form.

A respondent’s demographic characteristics affect their choice between completing a paper survey and an online survey, and are also known to affect the likelihood of reporting drug use. These demographic characteristics therefore needed to be taken into account when assessing whether there is a mode effect.

Regression analysis, which controls for the known demographics of respondents, was used to test whether there could be a mode effect across the three collection modes used in 2016. After adjusting for socio-demographic factors, significant differences in prevalence rates between online and paper respondents were found in 4 of the 9 variables studied.

The regression model suggests no significant difference between paper and online collection tools for drinking status; lifetime risk and single occasion risk status; and recent use of methamphetamines and tranquillisers.

Estimates for smoking, cocaine, pain-killers/opiates and cannabis may have been impacted by a difference in the mode effect of paper and online forms (online respondents were less likely to be a daily smoker, or use cocaine, pain-killers/opiates or marijuana in the previous 12 months than paper respondents after adjusting for demographic characteristics). This should be taken into account when comparing 2016 estimates with previous survey results.
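
The survey analysis used regression to adjust for demographics. As a simpler, hypothetical illustration of the same idea, the sketch below compares online and paper prevalence after direct standardisation to a common age distribution (the age groups, shares and prevalence figures are invented, not NDSHS results):

```python
# Direct standardisation sketch: weight each mode's age-specific
# prevalence by a common standard age distribution, so that a
# difference between modes is not driven by their age profiles.
# All numbers below are invented for illustration.

age_share = {"12-24": 0.20, "25-49": 0.45, "50+": 0.35}  # standard population
paper_prev = {"12-24": 0.18, "25-49": 0.14, "50+": 0.10}   # prevalence by age
online_prev = {"12-24": 0.15, "25-49": 0.11, "50+": 0.08}

def standardised(prev_by_age: dict) -> float:
    """Age-standardised prevalence under the common age distribution."""
    return sum(age_share[g] * prev_by_age[g] for g in age_share)

# A remaining gap after standardisation hints at a mode effect.
print(standardised(paper_prev))
print(standardised(online_prev))
```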

Data validation

In an attempt to enhance the reliability of estimates in the survey and maximise data quality, a small number of missing and contradictory responses were imputed through a rigorous menu of cross-validation edit and logic checks. For example, if a respondent failed to indicate a lifetime usage response (missing) or answered ‘no— never used’, but then provided detailed responses to subsequent questions (e.g. used in the last 12 months, how used, where used, source of supply) the missing or contradictory response was recoded as ‘yes’. These logic checks have been applied since 1998.
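
A minimal sketch of the kind of cross-validation logic check described above; the field names and codes are invented for illustration, not the actual NDSHS variable names:

```python
# Hypothetical recode: if lifetime use is missing or 'no' but later
# questions show evidence of use (e.g. used in the last 12 months,
# how used, source of supply), recode lifetime use to 'yes'.

def recode_lifetime_use(record: dict) -> dict:
    """Return a copy of the record with contradictory responses recoded."""
    fixed = dict(record)
    recent_evidence = any(
        record.get(field) not in (None, "", "no")
        for field in ("used_last_12_months", "how_used", "source_of_supply")
    )
    if record.get("ever_used") in (None, "no") and recent_evidence:
        fixed["ever_used"] = "yes"  # missing/contradictory response recoded
    return fixed

r = {"ever_used": "no", "used_last_12_months": "yes", "how_used": "smoked"}
print(recode_lifetime_use(r)["ever_used"])  # -> yes
```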

Statistical Linkage Key (SLK) validity

For the first time, the 2016 NDSHS included a self-complete Statistical Linkage Key (SLK) at the end of the survey. Approximately 67% of respondents attempted to complete the SLK and about 54% of respondents appear to have fully completed it, equating to 12,758 people. At the time the report was written, no 'cleaning' of the SLK had been undertaken; cleaning some of the incomplete SLKs (13%) could yield additional completions.
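
The completion figures above can be cross-checked against the total of 23,772 completed surveys reported under 'Mode effects':

```python
# Cross-check of the SLK completion percentage reported above.
total_respondents = 23_772      # completed surveys (see 'Mode effects')
fully_completed_slk = 12_758    # respondents with a fully completed SLK

slk_completion_pct = round(100 * fully_completed_slk / total_respondents)
print(slk_completion_pct)  # -> 54
```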

The quality of the SLK will affect any future linkage of these data, but it does not otherwise affect the quality of the other data collected in this survey.

Non-response bias

Non-response bias can potentially occur when selected respondents cannot or will not participate in the survey, or cannot be contacted during the fieldwork period. The magnitude of any non-response bias depends on the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not, as well as the extent to which non-response adjustments can be made during estimation (ABS 2007).

Response rates and contact rates

Overall, contact was made with 46,487 in-scope households, of which 23,772 questionnaires were categorised as being complete and useable, representing a response rate for the 2016 survey of 51.1%. This was higher than the response rates for the 2013 and 2010 surveys (49.1% and 50.6%, respectively).
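
The reported response rate can be verified from the figures above:

```python
# Check of the response-rate arithmetic reported above.
in_scope_households = 46_487
complete_and_useable = 23_772

response_rate = round(100 * complete_and_useable / in_scope_households, 1)
print(response_rate)  # -> 51.1
```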

A low response rate does not necessarily mean that the results are biased: as long as non-respondents are not systematically different in terms of how they would have answered the questions, there is no bias. Given the nature of the topics in this survey, some non-response bias is expected. Eliminating non-response bias as far as possible would require additional work to investigate the demographic profile of non-respondents and the answers they might have given had they chosen to respond.

Incomplete responses

Some survey respondents did not answer all questions, either because they were unable or unwilling to provide a response. The survey responses for these people were retained in the sample, and the missing values were recorded as not answered. No attempt was made to deduce or impute these missing values.

Response bias

Survey estimates are subject to non-sampling errors that can arise from errors in reporting of responses (for example, failure of respondents’ memories, incorrect completion of the survey form), the unwillingness of respondents to reveal their true responses and higher levels of non-response from certain subgroups of the population.

A limitation of the survey is that the data are self-reported, and people may not accurately report information relating to illicit drug use and related behaviours because these activities may be illegal. This means that results relating to illicit drugs may be under-reported. However, any biases are likely to be relatively consistent at the population level over time, so they would not be expected to have much effect on trend analysis. Legislation protecting people’s privacy and the use of consistent methodology over time mean that the impact of this issue on prevalence is limited.

However, some behaviours may become less socially acceptable over time which may lead to an increase in socially desirable responses rather than accurate responses. Increases in media reporting stigmatising a drug may increase the tendency to under-report use (Chalmers et al. 2014). Any potential increase in self-reported socially desirable behaviours needs to be considered when interpreting survey results over time.

Non-response adjustment

The estimation methods used take into account non-response and adjust for any under representation of population subgroups in an effort to reduce non-response bias.

The sample was designed to provide a random sample of households within each geographic stratum. Respondents within each stratum were assigned weights to overcome imbalances arising in the design and execution of the sampling. The main weighting took into account geographical stratification, household size, age and sex.
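
A simplified, hypothetical sketch of post-stratification weighting of this kind: each respondent in a stratum/sex/age cell gets weight equal to the population benchmark for that cell divided by the number of respondents in it. The actual NDSHS weighting is more involved, and the benchmark and sample counts below are invented:

```python
# Post-stratification sketch (not the actual NDSHS weighting).
# Cells combine geographic stratum, sex and age group; figures invented.

population = {("NSW capital", "female", "18-24"): 320_000}  # benchmark
sample = {("NSW capital", "female", "18-24"): 400}          # respondents

# Weight = population benchmark / respondents in the cell, so weighted
# cell totals match the benchmark population.
weights = {cell: population[cell] / n for cell, n in sample.items()}
print(weights[("NSW capital", "female", "18-24")])  # -> 800.0
```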

Sex

In line with the Australian Bureau of Statistics Standard for Sex and Gender Variables, the ‘Other (please specify)’ response was added to the sex question in the 2016 NDSHS. Twenty-three respondents reported their sex as 'other'. These people are included in any ‘persons’ totals presented but are excluded from analysis disaggregated by sex, as the data for ‘other’ sex were too unreliable to publish.

Indigenous Data

The survey was not specifically designed to obtain reliable national estimates for Aboriginal and Torres Strait Islander people. In the 2016 NDSHS, 2.4% of the sample (or approximately 568 respondents) identified as being of Aboriginal and/or Torres Strait Islander origin. The sample size for Indigenous Australians is relatively small and so estimates based on this population group should be interpreted with caution.

The total population of Aboriginal and Torres Strait Islander people forms a very small part of the total Australian population. According to the June 2011 estimated resident population figures, there were 699,881 Aboriginal and Torres Strait Islander people, or 3.0% of the total Australian population (ABS 2008b). At that time, about one-third (35%) of the Aboriginal and Torres Strait Islander population lived in Major cities, 22% in Inner regional areas, 22% in Outer regional areas, 8% in Remote areas and 14% in Very remote areas (ABS 2013).

The Aboriginal and Torres Strait Islander population living in Very remote areas shows other differences to populations living in Major cities including in household structure, size and age distribution.

In terms of remoteness, the 2016 NDSHS sample was more representative of the Indigenous population than previous surveys—32% lived in Major cities, 18% in Inner regional areas, 22% in Outer regional areas, 13% in Remote areas and 14% in Very remote areas. In the 2013 NDSHS, by comparison, 31% lived in Major cities, 19% in Inner regional areas, 29% in Outer regional areas, 11.5% in Remote areas and 8.9% in Very remote areas.

The NDSHS uses a self-completion questionnaire, and requires good comprehension of the English language (as it is not translated into other languages) and the ability to follow instructions. Practicality of the survey design meant that some Aboriginal communities and those with low levels of English literacy were excluded. In 2016, six of the 1,764 originally selected SA1s were Aboriginal communities with relatively low levels of English proficiency and literacy, and were replaced prior to commencement of fieldwork. The exclusion of these communities makes it difficult to generalise the results in the NDSHS to the whole Indigenous population.

Coherence:

Surveys in this series commenced in 1985. Over time, modifications have been made to the survey’s methodology and questionnaire design. The 2016 survey differs from previous versions of the survey in some of the questions asked and was also the first time that a multi-mode completion methodology was used.

Methodology

The 2016 survey used a multi-mode completion methodology—respondents could choose to complete the survey via a paper form, an online form or via a telephone interview. This was the first time an online form has been used. The 2013 and 2010 surveys consisted solely of a self-completion drop-and-collect method. In 2007 and 2004, a combination of computer-assisted telephone interviews (CATI) and drop and collect methods were used, and in earlier waves, personal interviews were also conducted.

The change in methodology in 2016 does have some impact on time series data, and users should exercise some degree of caution when comparing data over time (see ‘Mode effects’ in the accuracy section for more information).

To improve the geographic coverage of the survey, interviewers have been flown to Very remote areas selected in the sample since 2010. In previous surveys, some Very remote areas that were initially selected in the sample would have been deemed inaccessible and not included in the final sample.

Fieldwork was conducted between 18 June and 29 November 2016, similar timing to the previous wave, but with a slightly longer collection period. The collection period coincided with the 2016 Census conducted by the ABS, and fieldwork was put on hold for four days—the day of the Census and the three days afterwards—to minimise respondent fatigue.

The 9th of August 2016 was Census night in Australia. The vast majority of Census returns were to be completed via an online questionnaire. The Census website crashed that evening, with many households subsequently unable to complete the online Census form for up to two days. The website crash was a major media item for the next month or so and raised privacy and confidentiality issues. There was some concern that the Census website crash could have a significant impact on response and particularly online response for the 2016 NDSHS. As a consequence, weekly monitoring of acceptance and completion rates was undertaken and compared with rates attained pre-Census. The finding was that the Census website crash had no significant impact on the 2016 NDSHS.

Comparability with 1998 and early surveys

The 1998 survey introduced a number of methodological enhancements that potentially affected comparison with previous survey results. At the time the report was written, it was hypothesised that the cross-validation between lifetime and recent use may have systematically produced marginally higher prevalence estimates than if the 1995 methodology had been used. However, the 1998 NDSHS Technical Advisory Committee considered that the slight loss of comparison with 1995 was more than compensated for by the increase in the reliability of the 1998 estimates. The sample size of the 1998 survey was substantially increased from 3,850 in 1995 to 10,030 in 1998.

However, the increase in prevalence estimates reported in 1998 did not continue in subsequent surveys. At the time, it was difficult to identify whether the increase in the 1998 prevalence estimates was an anomaly or a real increase in prevalence. It is only the availability of time series data from subsequent surveys that has enabled an assessment of this issue and made it evident that the 1998 prevalence estimates were an anomaly. There are a number of possible causes that may have resulted in a ‘spike’ in prevalence in 1998, including substantial differences in the sampling and weighting methodology and sample frame:

  • About 40% of the 1998 sample was targeted at young people (aged 14–39 years) from capital cities to obtain more reliable estimates, in particular for illicit drugs. In 2001, the overall sample size was more than double that of 1998 (increased to 26,744 in 2001), eliminating the need for such a sample.
  • In 1998, 20% of the sample were drawn from the same household, whereas for subsequent surveys only one respondent per household was selected.
  • Only 4 age groups were used in the weighting; this was increased to 6 age groups in 2001, and since 2010, 11 age groups have been used.

There were 3 samples in 1998:

Sample 1—National random selection of households, where the person aged 14 years or over with the next birthday was selected. Data were collected from personal interviews, with self-completion booklets for the more sensitive questions. The number of respondents who completed the survey from this sample was 4,012.

Sample 2—The same households as in Sample 1 were sampled; the youngest person aged 14 or older, other than the Sample 1 respondent, was selected. Data were collected by self-completion booklets. Where a questionnaire was completed subsequent to the Sample 1 interview, one attempt was made to collect it in person. If it was still incomplete, the respondent was provided with a reply-paid pre-addressed envelope. The number of respondents who completed the survey from this sample was 1,983.

Sample 3—Capital cities only. From a random selection of households, the person aged 14 to 39 years with the next birthday was selected. Data were collected by self-completion booklets. Questionnaires were left for completion and interviewers returned 2 days later to collect them. Where a questionnaire was not completed by this time, the respondent was provided with a reply-paid pre-addressed envelope. The number of respondents who completed the survey from this sample was 4,035.

It is likely that systematic biases were introduced by the split sampling design in 1998.

The 1998 questionnaire was substantially shorter than those of subsequent surveys, and only about four questions were asked for each illicit drug, which reduced respondent burden.

While the changes between the 1995 and 1998 surveys, as well as the changes incorporated in subsequent surveys as outlined above, are likely to have had some impact, one of the main reasons for the spike is likely the inclusion of Sample 2 (sampling 2 people from the same household and selecting the youngest person). People in Sample 2 had a higher probability of selection and should have been assigned a lower initial weight, but the weighting did not adjust for selection probability (as no information on household size was collected), so some of these people were overweighted. Because these respondents were likely to be younger, and age is related to drug use, this overweighting is likely to have inflated the 1998 estimates.

Comparisons between pre-2001 data and data from 2001 onwards should be avoided where possible, given that the methodology and questionnaire design were quite different in the earlier surveys and the sample sizes were considerably smaller.

Questionnaire

The 2016 questionnaire was modelled on the 2013 version, to maintain maximum comparability. However, some refinements were made to ensure the questions remained relevant and useful. For more information on questionnaire changes in 2016 see Chapter 10 of the 2016 NDSHS report.

Comparison with other collections

There are a number of nationally representative data sources available to analyse tobacco, alcohol and illicit drug data. Comparisons of data from previous waves of the NDSHS, the National Health Survey and the Australian School Students' Alcohol and other Drug Survey show variations in estimates. Differences in scope, collection methodology and design may account for this variation, and comparisons between collections should be made with caution.

The most common data sources used for reporting the use of tobacco, alcohol and other drugs by Indigenous Australians are the National Aboriginal and Torres Strait Islander Social Survey (NATSISS) and the Australian Aboriginal and Torres Strait Islander Health Survey (AATSIHS). The surveys differ considerably, including in: the extent to which remote areas were surveyed; the age groups included; and the sample sizes. The questions asked in the surveys also differ considerably. Therefore results from these surveys are not directly comparable. It is important to keep this in mind when considering data from each of the surveys—results that may initially seem to contradict one another may simply be applicable to different groups within the population.

The 2014–15 NATSISS provides the latest data on Indigenous smoking rates. In 2014–15, 42% of Indigenous Australians aged 15 years and over reported being a current smoker (39% smoked daily and 3% less than daily). For Indigenous Australians aged 15 years and over, the rate of daily smokers declined by 6 percentage points between 2008 and 2014–15 (from 45% to 39%). For Indigenous Australians sampled in the NDSHS, even though the rate of daily smoking was much lower than that reported in the NATSISS, it declined by 7 percentage points between 2010 and 2016 (from 34% to 27%) and is declining at a faster rate than the NATSISS smoking rate.

According to the NATSISS, Indigenous Australians who lived in Very remote areas were more likely to be current smokers in 2014–15 (53%) than those living in Major cities (36%). Between 2002 and 2014–15, the proportion of current smokers in non-remote areas declined from 50% to 39%, but in remote areas rates remained relatively steady (55% in 2002 and 52% in 2014–15) (AHMAC 2017).

These same patterns were not seen in the NDSHS. According to the 2016 NDSHS, Indigenous people living in Major cities were more likely to be current smokers than Indigenous people living in Remote and Very remote areas—40% of Indigenous people living in Major cities were current smokers, compared with 29% living in Remote and Very remote areas. The lower smoking rate in the NDSHS is partly due to the much lower smoking rate reported by Indigenous people living in Remote and Very remote areas. This may be due to Aboriginal communities not being captured in the NDSHS sample.

Analysis

The alcohol risk calculation (lifetime risk and single occasion risk) was revised in 2013. In previous years (2010 and earlier), a very small proportion of recent drinkers who did not provide information on the quantity of alcohol consumed were assumed to be ‘low risk’. In 2013, these drinkers were excluded from the alcohol risk analysis; trend data were revised accordingly (2001 to 2010) and will not match data presented in previous reports. Because the denominators used to calculate a person’s drinking status and their alcohol risk level are now slightly different, the proportion of ‘abstainers’ no longer equates to the combined proportion of those who ‘never drank’ and ‘ex-drinkers’.
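
The effect of the revised denominator can be sketched as follows. This is an illustrative example only, with hypothetical field names and counts (not NDSHS data): recent drinkers with missing quantity information are excluded from the risk denominator rather than assumed ‘low risk’, so the abstainer proportion differs depending on which denominator is used.

```python
def abstainer_proportions(records):
    """Illustrate why 'abstainers' (risk analysis) no longer equals
    'never drank' + 'ex-drinkers' (drinking status) under the 2013 revision.

    Each record is a dict with a 'status' and a 'quantity' (drinks reported,
    or None if not provided). Field names are hypothetical.
    """
    # Drinking status uses all respondents as the denominator.
    status_denom = len(records)

    # The revised risk analysis excludes recent drinkers who gave no
    # quantity information, shrinking the denominator.
    risk_pool = [
        r for r in records
        if not (r["status"] == "recent drinker" and r["quantity"] is None)
    ]
    risk_denom = len(risk_pool)

    abstainers = sum(
        1 for r in records if r["status"] in ("never drank", "ex-drinker")
    )
    return {
        "pct_of_all_respondents": 100 * abstainers / status_denom,
        "pct_of_risk_pool": 100 * abstainers / risk_denom,
    }


# Hypothetical sample: 2 never drank, 2 ex-drinkers, 6 recent drinkers,
# one of whom did not report quantity consumed.
sample = (
    [{"status": "never drank", "quantity": None}] * 2
    + [{"status": "ex-drinker", "quantity": None}] * 2
    + [{"status": "recent drinker", "quantity": 4}] * 5
    + [{"status": "recent drinker", "quantity": None}]
)
result = abstainer_proportions(sample)
# 4/10 = 40.0% of all respondents, but 4/9 ≈ 44.4% of the risk pool:
# the two denominators no longer agree.
```

Because the missing-quantity drinker is dropped from the risk denominator but not from the drinking-status denominator, the two proportions diverge, which is why the ‘abstainers’ figure no longer matches ‘never drank’ plus ‘ex-drinkers’.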

In 2016, the way the survey captured non-medical use of ‘Pain-killers/analgesics and other opioids’ changed to better reflect how these substances are used and understood in the community.

Specifically:

  • Over-the-counter non-opioid analgesics, such as paracetamol and aspirin, were removed from the section because they are not known to be misused for cosmetic purposes, to induce or enhance a drug experience, or to enhance performance.
  • The previously separate ‘pain-killers/analgesics’ and ‘other opiates/opioids’ sections of the survey were combined, to avoid capturing users of prescription pain-killers/opiates such as oxycodone in two sections.
  • Categories of analgesics are defined by their most psychoactive ingredient rather than their brand name, and brand names are only presented as examples. This brings the section in line with other pharmaceuticals captured in the survey.

Heroin (see illicit drugs) and methadone, which are also opioids, continue to be captured separately.

These changes represent a break in the time series for both the ‘pain-killers/analgesics’ and ‘other opiates/opioids’ drug types. As such, significance testing has not been carried out on changes between 2013 and 2016 data, and comparisons over time for these drug types are avoided. Where time series data are presented, pain-killers/analgesics and opioids data have been combined for earlier years but are still not directly comparable to 2016.

As pain-killers/analgesics and other opioids are the largest contributing drug type to the pharmaceuticals total, significance testing has not been carried out on overall pharmaceutical misuse and any comparison across time must be interpreted with caution.

Two different estimates are presented in the supplementary tables for misuse of pain-killers/analgesics and opioids, and for misuse of pharmaceuticals. Firstly, the 2001 to 2013 data were reanalysed to combine pain-killers/analgesics and opioids. Secondly, a new question added to the 2013 NDSHS asked respondents about their use of specific pain-killers/analgesics, which enabled the 2013 pain-killers/other opiates data to be reanalysed to exclude respondents who had only used paracetamol, aspirin and other over-the-counter pain-killers. Both of these estimates have been presented in the pharmaceutical tables.
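
The logic of combining the two previously separate sections can be sketched as follows. This is a hypothetical illustration (invented field names, not NDSHS data): a respondent counts once in the combined category if they misused either pain-killers/analgesics or other opiates/opioids, which avoids double-counting someone (for example, an oxycodone user) who was previously captured in both sections.

```python
def combined_misuse_pct(respondents):
    """Percentage of respondents misusing the combined
    pain-killers/analgesics and other opioids category.

    Each respondent is a dict with boolean flags for the two formerly
    separate survey sections (field names are hypothetical).
    """
    combined = sum(
        1 for r in respondents if r["painkillers"] or r["opioids"]
    )
    return 100 * combined / len(respondents)


# Hypothetical sample of four respondents.
sample = [
    {"painkillers": True,  "opioids": False},
    {"painkillers": False, "opioids": True},
    {"painkillers": True,  "opioids": True},   # counted once, not twice
    {"painkillers": False, "opioids": False},
]
# 3 of 4 respondents misused at least one category → 75.0%
```

Summing the two sections separately would count the third respondent twice; taking the logical OR per respondent gives the combined prevalence used when back-casting the 2001 to 2013 data.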

There were no changes to the Tranquillisers/sleeping pills, Steroids or Methadone/buprenorphine sections of the questionnaire.

Source and reference attributes

Submitting organisation:Help on this term

Tobacco, Alcohol and Other Drugs Unit, Australian Institute of Health and Welfare.

Steward:Help on this termAustralian Institute of Health and Welfare

Relational attributes

Related metadata references:Help on this term

Supersedes National Drug Strategy Household Survey 2013 – Data Quality Statement AIHW Data Quality Statements, Archived 28/09/2017
