[acb-hsp] Article re: improvements / decline after SSA terminated addictions benefits

J.Rayl thedogmom63 at frontier.com
Tue Jun 26 15:55:51 EDT 2012


General Course and Correlates of Improvement and Decline Following Termination of
the Supplemental Security Income Impairment Category for Drug Addiction and Alcoholism
by James A. Swartz and Zoran Martinovich
The federal legislation (PL 104-121) that ended supplemental security income (SSI)
disability benefits for drug addiction and alcoholism (DA&A) was a blunt instrument
of social change. For the first time in the history of the Social Security disability
programs, an entire impairment category was eliminated (Hunt & Baumohl, a, this issue).
Although some estimates projected that the great majority of DA&A recipients would
requalify for disability benefits under a different impairment category, only about
one-third did so (Hunt & Baumohl, a, this issue).
The legislation ending DA&A benefits did provide for an appeals process by which
former DA&A beneficiaries with medical or psychiatric disabilities could retain benefits,
but in practice the heterogeneity of the DA&A population went largely unaccounted for.1
For example, though caricatured by a few well-publicized examples in the popular
media as being mainly conniving addicts gaming the system to obtain easy federal
money for drugs (Hunt & Baumohl, a, this issue), former DA&A recipients were quite
varied in this respect and in many other respects. While some DA&A beneficiaries
continued to use alcohol and illegal drugs while receiving benefits and no doubt
used their benefits to buy them (see Speiglman et al., this issue, and Podus, Chang
et al., this issue, for more details on the prevalence of active drug use at program
termination), others were actively engaged in drug treatment programs or had recently
completed drug treatment (see Swartz, Campbell et al., this issue) and were appropriately
using the additional income to pay for housing, food, and other necessities. Still
others were no longer actively using drugs but had chronic medical conditions (e.g.,
cirrhosis, hepatitis), a consequence of years of addiction, that left them too impaired
to work.
Indeed, Hunt and Baumohl (b, this issue) found in a series of interviews with a sample
of former DA&A recipients that some were surprised to be receiving benefits for a
drug addiction, thinking they had applied for a medical or psychiatric disability.
As Hunt and Baumohl discuss, individuals often were steered into applying for disability
benefits under the ambiguously defined DA&A category even though their substance
use was modest and/or unrelated to their disabling condition.
An attempt to classify the variation among former SSI DA&A beneficiaries comes from
a qualitative study conducted by Goldstein et al. (2000). In this Chicago study,
Goldstein and colleagues developed a simplified but compelling threefold classification
system. One group of former SSI DA&A recipients was seen as "hustlers" who worked
the federal and state systems for quick and easy money. They drank and used illegal
drugs but were quite capable of work, though they opted not to work as long as the
federal money was flowing. When the DA&A program ended, these individuals had a relatively
easy time generating replacement income, legally or illegally. Another group of individuals,
"lost souls," had many disabling social, medical, and psychiatric problems as a result
of long-term addiction, and their mental and physical incapacitation made it difficult
for them to reapply for benefits under a different impairment category or to gain
work. They were totally dependent on family and friends for assistance after program
termination; for those lacking such connections, the outlook was especially grim.
The third group consisted of what Goldstein et al. termed "good citizens": those
who appropriately applied for benefits because their addiction made it difficult
for them to sustain employment, but who had every intention of entering drug treatment
and who saw their receipt of federal benefits as temporary.
The wide range in age of former SSI DA&A beneficiaries, from 21 to 64 years old, was
another relevant source of heterogeneity. Most beneficiaries in their early to late
20s would have been addicted for less than 10 years (if they were ever addicted)
and thus were less likely to have suffered the more debilitating mental and physical
effects of long-term chronic drinking or illegal drug use. DA&A beneficiaries in
this group may have been among the most likely to be actively drinking and using
while receiving federal benefits and the most likely to continue after their loss
of benefits, given the typical "careers" of heavy drinkers and illegal drug users.
They may also have been among the most likely to recover quickly and completely from
any addiction-related impairments they did have. Alternatively, those in their 40s
and older might have begun the process of "maturing out" or "burning out" of their
drinking and illegal drug use, substantially reducing or ceasing their intake while
nevertheless contending with physical and psychological sequelae of use not yet burdening
their younger counterparts. Their recoveries from addiction-related impairments may
have been more difficult and perhaps impossible, given certain advanced or chronic
conditions.
Given these important sources of heterogeneity, it is reasonable to expect considerable
variation in response to the loss of SSI benefits. While some former DA&A beneficiaries
might adapt quickly to the loss by requalifying for continued benefits under another
impairment category, by gaining employment, or by relying on family and friends for
food, housing, and other necessities, others might be adversely affected in any of
a number of different areas of functioning. Then too, given that residual state-subsidized
safety nets also varied from state to state (e.g., some states continued to provide
Medicaid coverage after termination of SSI disability benefits or access to publicly
funded drug treatment, but others did not; some states had General Assistance programs,
but others did not), difficulties associated with the loss of federal benefits likely
also depended on the state or even the county of residence.
Other papers in this issue have examined the effects of lost DA&A benefits on one
or two areas of functioning. For example, Podus, Barren et al. (this issue) studied
how loss of Medicaid and health insurance benefits affected access to health care.
Norris et al. (this issue) examined the degree to which benefits termination affected
the acquisition of food and housing. Both studies found variation in functioning
in the respective areas studied, though loss of medical insurance and the inability
to replace lost cash benefits were common important determinants of who fared better
or worse. Swartz, Martinovich, and Goldstein (this issue) assessed criminality among
former DA&A beneficiaries and found elevated rates of self-reported criminal offending
among those who lost benefits and were frequent users of heroin and cocaine. Again,
failing to replace lost SSI income was an important determinant of outcome.
In this paper, we examine the responses of former DA&A beneficiaries to the loss
of benefits across several areas of functioning and across time. Our goal was to
look at how termination of the DA&A program affected individuals generally and to
understand the degree of variation in response to lost benefits. We wanted to determine
what proportions of individuals who lost benefits improved, remained relatively the
same, or declined across and within a broad number of functional areas. Second, and
related, we wanted to determine the individual and social correlates of improvement
and decline, given the expected variations in adaptability.2
Method
This report is based on data collected as part of a federally funded multisite, two-year
prospective study to examine the social, medical, legal, financial, and psychological
consequences of terminating DA&A benefits. The sites, identified by the name of the
largest city within each area, are Chicago, Detroit, Seattle, Portland (OR), and,
in California, San Jose, Los Angeles, Stockton, Oakland, and San Francisco. Subject
selection, recruitment, sample characteristics, the survey instrument, the collection
protocol, and sample weighting have all been described in more detail elsewhere (see
Swartz, Tonkin, and Baumohl, this issue).
Subjects
The full sample consisted of 1,764 DA&A beneficiaries, interviewed at baseline between
December 1996 and April 1997. Subjects were eligible for the study if they received
SSI DA&A benefits in 1996, were between 21 and 59 years of age, and did not receive
Social Security disability insurance concurrently with SSI (see Hunt & Baumohl, a,
this issue; Swartz, Tonkin, and Baumohl, this issue). Subjects were reinterviewed
every six months over two years for a total of up to five assessments. The aggregate
24-month follow-up rate was 82%, with 1,444 of the 1,764 subjects interviewed at
baseline completing all five interviews. Because our statistical models required
that subjects complete at least three of the five assessments, we excluded 92 subjects,
leaving a full analytic sample of 1,640. Of these, 1,425 were assessed at all five
waves, 158 at four waves, and 57 at three waves.
Weighting strategy
We applied a three-step weighting procedure to the data to obtain more accurate population
estimates, to correct for sampling differences among the sites, and to conduct statistical
tests at appropriate power levels (see Choudhry & Helba, this issue, for more detail
on the weighting procedures). Based on estimates of the socio-demographic variable
mix within each site, we weighted subjects to approximate the target population at
each site more closely. Next, we weighted subjects based on the size of the target
population within each site to correct for differences in sampling ratios across
sites. Finally, based on the first two weighting steps, we calculated a normalized,
adjusted sample size. The normalized sample size is simply a linear transformation
of the combined adjusted weights that maintains the observed sample size while reflecting
the influence of the two weights. Since the normalized, adjusted weights yield the
same estimates as the adjusted weights without inflating the sample size, we based
all inferential statistics on these weights.
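The normalization step described above can be sketched as a simple rescaling. The function below is illustrative only (the variable names and interface are ours, not the study's): it rescales the combined adjusted weights so they sum to the observed sample size while preserving each subject's relative influence.

```python
import numpy as np

def normalize_weights(adjusted_weights):
    """Rescale combined adjusted weights so that they sum to the observed
    sample size, preserving each subject's relative influence.
    Illustrative sketch of the linear transformation described in the text."""
    w = np.asarray(adjusted_weights, dtype=float)
    n = len(w)                    # observed sample size
    return w * (n / w.sum())      # normalized weights now sum to n

# Example: three subjects with unequal adjusted weights; the result
# sums to 3.0 while keeping the 2:1:1 ratio intact.
weights = normalize_weights([2.0, 1.0, 1.0])
```

Because the sum of the normalized weights equals the observed n, inferential tests run on them are not conducted at an artificially inflated sample size.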
Instrument
At each assessment, project staff interviewed subjects using a questionnaire developed
for this study and described in detail by Swartz, Tonkin, and Baumohl (this issue).
The questionnaire covered multiple areas of functioning and, on average, took one
to two hours to administer. In addition to the seven functional areas discussed below
and covered in this study, the questionnaire also included questions on requalification
for SSI under a different impairment category, experiences with representative payees,
housing and housing stability, food acquisition and hunger, and income and employment.
Dependent variables
We based our analyses on data derived from seven sections of the assessment instrument
to create the following scales: alcohol and drug use (DRG), medical conditions (MED),
psychiatric conditions (PSY), criminality (CRM), victimization (VIC), activities
of daily living (ADL), and family conflicts (FAM).3 We used these seven scales to
create the primary dependent variables in our analyses, using a series of steps that
ultimately classified subjects as generally showing improvement, decline, or no change
across each of the seven functional areas and across time.
In the first step, we computed a scale score for each subject for each scale for
each follow-up interview. For each scale, we summed individual items to compute a
total scale score. If more than one of the items comprising a scale was missing,
we set the scale score to missing. Fewer than 1% of the subjects interviewed had
a missing score on any scale for any interview wave. Cumulatively, the FAM scale
had the highest proportion of missing scores across follow-up interviews, 2.5%; all
the remaining scales had fewer than 2% missing scores.
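The scoring rule above can be sketched as a small function. One detail is an assumption on our part: the paper does not state how a single missing item is handled, so here it simply contributes nothing to the sum.

```python
def scale_score(items):
    """Sum the items of a symptom scale into a total score, setting the
    score to missing (None) when more than one item is missing.
    Illustrative sketch; handling of a single missing item (treated as
    contributing zero) is an assumption not stated in the text."""
    n_missing = sum(1 for x in items if x is None)
    if n_missing > 1:
        return None                              # scale score set to missing
    return sum(x for x in items if x is not None)

scale_score([1, 0, 2, None])   # one item missing: still scored (3)
scale_score([1, None, None])   # two items missing: score is missing (None)
```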
Table 1 shows scale definitions and Kuder-Richardson 20 (KR-20) coefficients for each scale calculated
at baseline. KR-20, a measure of the internal consistency of the scale, ranged from
.34 for the criminality scale to .79 for the activities of daily living scale. Analyses
of item-to-scale reliability across waves yielded similar alphas (not shown), indicating
that the baseline measures of internal consistency were stable across time. Two scales,
the alcohol and drug use scale and the crime scale, had item-to-scale reliability
scores that did not meet the generally accepted minimum criterion for internal consistency
of .60. This means that the items comprising the alcohol and drug scale and the crime
scale do not measure unidimensional constructs. This is not surprising, however,
as these scales are more like inventories than unitary scales. For example, we would
not expect someone who uses heroin frequently to also use amphetamines frequently,
nor would we expect that someone who shoplifts also commits other crimes such as
forgery at the same rate.
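For readers unfamiliar with KR-20, the textbook formula for dichotomous (0/1) items can be computed as below. This is a generic sketch, not the study's code; it uses the population variance of total scores, and a coefficient near 1.0 indicates highly consistent items.

```python
import numpy as np

def kr20(item_matrix):
    """Kuder-Richardson 20 internal-consistency coefficient for
    dichotomous (0/1) items; rows are subjects, columns are items.
    Textbook formula: (k/(k-1)) * (1 - sum(p*q) / total-score variance)."""
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]                    # number of items
    p = X.mean(axis=0)                # proportion endorsing each item
    q = 1.0 - p
    total_var = X.sum(axis=1).var()   # population variance of total scores
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)

# Perfectly consistent response patterns yield KR-20 = 1.0
kr20([[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]])
```

As with the DRG and CRM scales discussed above, item sets that behave like inventories rather than unidimensional constructs will produce low coefficients even when each item is individually informative.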
We also examined the pattern of correlations among the scales at baseline to determine
the extent to which they measured independent constructs. Table 2 presents the results
of this analysis. Inspection of Table 2 reveals that many of the correlations were
significant, indicating some degree of shared variance among the scales. However,
most of the significant correlations (10/17) were low, ranging from .10 to .25, meaning
that despite being statistically significant, these scales shared at most 6% of their
variances. Among the remaining significant correlations larger than .25, four of
seven were attributable to the PSY scale, which significantly correlated with the
ADL, MED, VIC, and FAM scales. This suggests that among our subjects, psychological
problems were associated with problems in other areas. Despite the number of significant
findings, however, the proportion of shared variance among most of the scales was
relatively low, excepting the PSY scale. Thus, while the scales did not completely
measure independent constructs, the degree of overlap was modest.
In the second step, to assign subjects to one of the three change categories, we
classified change scores from baseline to any wave as showing meaningful increases
or declines if a change of 34 percentile rank points was indicated based on differences
in the sample percentile ranks. In a normal distribution, 34 percentile points constitutes
a change of one standard deviation from the distribution average and is conventionally
regarded as a large effect size (cf. Cohen & Cohen, 1983). We used this effect size
in order to be conservative in our assessments of when a subject actually made a
change on one indicator. Moving one standard deviation up or down on a scale likely
indicates a real and clinically significant change on that scale as opposed to measurement
error or real but nonclinically significant changes in functioning.
Since distributions for the present scales varied in the extent of deviations from
normality, no normality assumption was made, and a 34-percentile criterion was applied
based on the actual baseline distribution, with adjustments for scores moving to
or from a scale score of zero. If the change between two scores entailed a difference
from or to the floor on a measure (i.e., a score of "0"), percentile rank norms based
on the full sample at baseline were used. If both the baseline and the subsequent
wave score were within the range of "at least some" problems on the scale, norms
based on the baseline sample with "at least some" problems were used.
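The 34-percentile-point criterion can be sketched as follows. This is a simplified illustration: it classifies a change against the full baseline distribution only, and does not reproduce the paper's adjustments for scores moving to or from the scale floor of zero.

```python
import numpy as np

def percentile_rank(score, baseline_scores):
    """Percentile rank of a score within the baseline distribution
    (percentage of baseline scores at or below it)."""
    b = np.asarray(baseline_scores, dtype=float)
    return 100.0 * (b <= score).mean()

def classify_change(base, follow, baseline_scores, threshold=34.0):
    """Classify a baseline-to-follow-up change using the 34-percentile-
    point criterion; higher scale scores indicate more problems.
    Simplified sketch: floor (zero-score) adjustments are omitted."""
    delta = (percentile_rank(follow, baseline_scores)
             - percentile_rank(base, baseline_scores))
    if delta <= -threshold:
        return "improved"     # substantially fewer reported problems
    if delta >= threshold:
        return "declined"     # substantially more reported problems
    return "unchanged"

baseline = list(range(100))
classify_change(10, 80, baseline)   # large rise in problem score: "declined"
classify_change(50, 55, baseline)   # small shift: "unchanged"
```

Because the criterion is anchored to the empirical baseline distribution rather than a normal curve, it remains usable for the skewed scale distributions noted above.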
Based on these criteria, we classified the observed changes at each follow-up wave
relative to baseline as substantially declining (i.e., showing poorer functioning
due to a greater reported number of symptoms, problems, etc.), substantially improving
(i.e., showing better functioning as indicated by fewer reported problems), or unchanged.4
Table 3 reports the weighted percents for these events by wave and by scale for the
study sample measured on at least three occasions (including baseline). In general,
the percentage of subjects improving was greater than the percentage declining across
all scales and across all interview waves. One noteworthy exception was the FAM scale.
For this scale, declines in functioning outpaced improvement from baseline to wave
2, though this pattern reversed direction by wave 3, and the reversal was maintained
at waves 4 and 5. Similarly, for the CRM scale the tendency for improvements to outpace
declines (i.e., committing fewer crimes) was not statistically significant until
waves 4 and 5. However, for all scales the relative odds of showing improvement were
greatest and statistically significant at the two-year follow-up interview.
In order to describe the pattern across waves for a given scale, we next counted
the number of declines and improvements (using the same 34th-percentile criterion)
from waves 2 to 5 relative to baseline. Table 4 details the weighted percentage of
cases showing (1) two or more improvements and no declines, (2) one improvement and
no declines, (3) a rare mixed pattern, ending on an improvement, (4) no changes,
(5) a rare mixed pattern, ending on a decline, (6) one decline and no improvements,
or (7) two or more declines and no improvements. In general, change on a given scale
tended to be in a single direction. Very few cases showed both a substantial decline
at one wave and a substantial improvement at another wave.
Cases were classified on a given scale as showing "sustained improvement" if two
or more improvements and no declines occurred relative to baseline, and as showing
"sustained decline" if two or more declines and no improvements occurred. To describe
general tendencies toward improvement or decline, we cross-classified the number
of scales showing sustained declines and the number of scales showing sustained improvements.
Table 5 lists the weighted total percentage of cases in each cell of this cross-classification:
Although the general tendency across scales was toward improvement, the data in Table
5 suggest that for a substantial subset of cases, sustained improvements on some
scale(s) co-occurred with sustained decline on other scale(s). To retain some of
this complexity in the final classification scheme, we collapsed the cells in Table
5 into one of five overall outcomes:
(1) "No Sustained Change": All scales, or all but one scale, showed no sustained increase
or decrease (38.2%).
(2) "Worse": Two or more scales showed a sustained decline, and no scales showed a
sustained improvement; or three or more scales showed a sustained decline and not
more than one scale showed a sustained improvement (9.2%).
(3) "Better": Two or more scales showed a sustained improvement, and no scales showed
a sustained decline; or three or more scales showed a sustained improvement and not
more than one scale showed a sustained decline (32.1%).
(4) "Mixed, Mostly Unchanged": At least one scale showed a sustained decline and at
least one scale showed a sustained improvement, but most scales remained unchanged
(16.8%).
(5) "Mixed": At least two scales showed a sustained improvement and at least two scales
showed a sustained decline (3.6%).
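The collapsing rules above can be expressed as a small decision function over the counts of improving and declining scales. This is a sketch: the order in which boundary cases are resolved is our assumption, since the paper does not spell it out.

```python
def global_outcome(n_improved, n_declined):
    """Collapse the number of scales (of seven) showing sustained
    improvement and sustained decline into one of the five overall
    outcomes. Sketch only; the precedence given to the 'Mixed'
    category at boundaries is an assumption."""
    imp, dec = n_improved, n_declined
    if imp >= 2 and dec >= 2:                       # (5) "Mixed"
        return "Mixed"
    if (dec >= 2 and imp == 0) or (dec >= 3 and imp <= 1):
        return "Worse"                              # (2)
    if (imp >= 2 and dec == 0) or (imp >= 3 and dec <= 1):
        return "Better"                             # (3)
    if imp + dec <= 1:                              # (1) at most one scale changed
        return "No Sustained Change"
    return "Mixed, Mostly Unchanged"                # (4)

global_outcome(3, 0)   # → "Better"
global_outcome(1, 1)   # → "Mixed, Mostly Unchanged"
```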
The modal grouping for participants (38.2%) was "No Sustained Change." Among those who did
change relative to their baseline status, participants were 3.5 times more likely
to be improved (32.1%) than to be worse (9.2%); however, one in five participants
showed a mixed pattern of sustained improvement on some scales and sustained decline
on others.
Table 6 describes the percentage of cases with sustained decline, sustained improvement,
or no change on each of the seven scales for the five global outcomes groups. Subjects
in the "No Sustained Change" group may have shown a sustained decline or improvement
on one, and only one, scale. The probabilities of improvement were somewhat higher
for PSY (p=.019), VIC (p <.001), and FAM (p <.001) scales based on sign tests; other
scales were equally likely to show occasional declines or improvements. Subjects
whose scales tended to worsen showed the greatest declines on DRG, VIC, FAM, and
CRM scales (with 48% to 59% worsening on each scale), a moderate likelihood of worsening
on MED and ADL scales (22% and 34%), and a relatively low likelihood of worsening
on the PSY scale (16%). Unlike subjects in the "Worse" category, for subjects in
the "Better" category, improvements were less uniquely linked to certain subsets
of scales. For these subjects, the percentage improving on each scale fell within a
relatively narrow range, from a high of 53% (PSY) to a low of 35% (MED). Among subjects in
the "Mixed, Mostly Unchanged" category, sustained improvements were relatively more
likely on PSY (p=.003), ADL (p=.010), and MED (p <.001) scales. However, sustained
declines were relatively more likely for FAM (p=.026). Among subjects in the more
distinctly "Mixed" category, sustained improvements in PSY (p <.001) were more likely,
and sustained declines in CRM (p <.001) were more likely.
Correlates of change status
We generated a series of additive, binary logistic models to assess the effects of
relevant predictor variables on the five-category global outcomes variable based
on cross-classifying the number of sustained increasing and decreasing scales. We
generated five logistic functions predicting the odds of remaining the "Same," becoming
"Worse," becoming "Better," becoming "Mixed, Mild Threshold," and becoming "Mixed,
Severe Threshold." Since the two mixed categories are on an ordinal "mixed" continuum,
odds ratios from the "Mixed, Mild Threshold" function refer to odds of falling in
either mixed category. All other odds ratios refer to the odds of sampling an outcome
in a single category.
Predictor variables: covariates
(1) Average Baseline P-Rank: Baseline percentile ranks for each case were averaged,
and this average was included to adjust for differences in improvement/decline attributable
to floor/ceiling effects or regression to the mean.
(2) Demographic Variables: A number of demographic variables were included and coded
as follows:
a. Sex (dummy coded, reference category = female).
b. Marital status (dummy coded, reference category = never married).
c. Age (in years).
d. Ethnicity (dummy coded, reference category = white).
e. Education (dummy coded, reference category = high school graduate).
(3) Site: Study site was included in the model as an "effects coded" predictor (Cohen
& Cohen, 1983). This strategy yields inferential tests comparing adjusted odds for
each site with weighted odds in the full target population.5
(4) Overall Government Assistance Pattern (OGAP): We classified subjects in the full
analytic sample into one of four groups based on whether they had requalified for
SSI under another impairment category or had demonstrated other "substantial income
replacement" ([SIR]; i.e., legal income greater than or equal to 75% of their baseline
income). The variables were dummy coded. Pair-wise comparisons of all OGAP groups
were conducted.
Criteria for OGAP categories
Group 1: Substantially ON SSI (ON)
These subjects reported retaining a constant level of government support across the
entire study. For those subjects assessed at five waves, this category included all
subjects on SSI at four of the five waves (n=574 of 1,425). For subjects assessed
at four waves, this category also included all subjects on SSI for at least three
of the four waves. However, if the subject was off SSI at only one assessment, we
required that the "missing" assessment be nested between waves during which the subject
was on SSI (n=50 of 158). Subjects assessed at three waves had to be on SSI at every
assessment (n=18 of 57).6 The total unweighted n for this group was 642 (39% of 1,640).
Group 2: OFF SSI, but with Substantial Income Replacement (OFF/SIR)
These subjects reported losing SSI, but also reported they were consistently able
to replace it through other legal income (e.g., employment, Temporary Assistance
for Needy Families). For those subjects assessed at five waves, this category included
all subjects on SSI at not more than one wave and earning at least 75% of baseline
income (i.e., SIR) through legal means at all or all but one of the remaining waves
(n=182 of 1,425). For subjects assessed at four waves, the same criterion was applied,
with the additional stipulation that if the person was without either SSI or SIR
at any wave, any "missing" assessment was nested between periods with SIR (n=14 of
158). For subjects assessed at three waves, once again, the subject had to be on
SSI at not more than one wave, but the presence of SIR was required at the other
two assessments (n=5 of 57). The total unweighted n for this group was 201 (12% of
1,640).
Group 3: OFF SSI, and No Substantial Income Replacement (OFF/NSIR)
These subjects lost SSI and were not consistently able to replace their lost cash
benefits through other legal income. For those subjects assessed at five waves, this
category included all subjects on SSI at not more than one wave and without SIR at
two or more of the remaining waves (n=467 of 1,425). Once again, for subjects assessed
at four waves, the same criterion was applied, with the additional stipulation that
if the person was without either SSI or SIR at any wave, any "missing" assessment
was nested between periods without SIR (n=49 of 158). Subjects assessed at three
waves had to be on government support at not more than one wave and without SIR at
one or more of the remaining waves (n=17 of 57). The total unweighted n for this
group was 533 (32% of 1,640).
Group 4: Partial Loss of Government Support (PART)
Subjects in this residual category did not meet the criteria for inclusion in any
of the three previous groups. They tended to show a less stable assistance and employment
pattern, receiving some SSI or achieving SIR at one or two assessment points only
to lose benefits or earn less money at a subsequent assessment point. This group
included 264 (16% of 1,640) of the subjects in the full
analytic sample (202 of 1,425 with five waves, 45 of 158 with four waves, and 17
of 57 with three waves).
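The five-wave OGAP rules above can be sketched as a classification function. This is a simplified illustration: it covers only subjects assessed at all five waves, and the four- and three-wave variants with nested "missing" assessments are omitted.

```python
def ogap_group(on_ssi, has_sir):
    """Classify a subject's Overall Government Assistance Pattern from
    five per-wave booleans: on_ssi[i] marks receipt of SSI at wave i,
    has_sir[i] marks substantial income replacement (legal income >= 75%
    of baseline) at wave i. Simplified five-wave sketch of the rules in
    the text; three- and four-wave variants are not handled."""
    n_on = sum(on_ssi)
    if n_on >= 4:
        return "ON"                      # substantially on SSI
    if n_on <= 1:
        off = [sir for on, sir in zip(on_ssi, has_sir) if not on]
        if sum(off) >= len(off) - 1:
            return "OFF/SIR"             # SIR at all or all but one off-SSI wave
        return "OFF/NSIR"                # without SIR at two or more waves
    return "PART"                        # partial, unstable pattern

ogap_group([1, 1, 1, 1, 0], [0, 0, 0, 0, 0])   # → "ON"
ogap_group([0, 0, 0, 0, 0], [1, 1, 1, 1, 0])   # → "OFF/SIR"
```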
Results
Table 7 reports odds ratios for being Worse, Better, Mixed (mild threshold), or Mixed
(severe threshold). With respect to covariates, baseline average percentile rank
predicted worsening and improvement (as expected, a regression-to-the-mean effect).
Controlling for this and other covariates, older subjects were more likely to remain
unchanged and less likely to show a worsening pattern. Subjects who had completed
high school were more likely to remain unchanged than subjects who only partially
completed high school and those who went on to higher education. Subjects who did
not complete high school were 3.2 times more likely to show a substantially mixed
course (with most scales changing, but in different directions). Subjects educated
beyond high school were more likely to show a mixed course meeting the less stringent
mixed criterion (1+ scales both increasing and decreasing). Controlling for model
covariates, SSI loss patterns, etc., black subjects were less likely to remain unchanged
and 1.65 times more likely to show a better course than white subjects.
With respect to marital status, currently married subjects were 2.8 times more likely
than never married subjects to show some kind of change. Married, separated, and
divorced subjects were 2.1, 1.7, and 1.8 times more likely, respectively, to show
a worse course than never married subjects. With respect to site, compared with subjects
in the full study target population, subjects from Chicago were 2.1 times more likely
to remain unchanged, less likely to become better (OR=.59) and less likely to show
a mixed response pattern (OR=.68). Subjects from the full target population were
2.2 times more likely to show a worsening pattern than subjects from Detroit. Finally,
subjects from Oakland were 1.6 times more likely to show a mixed response pattern.
Table 7 includes OR's associated with six pair-wise comparisons of subjects based
on their pattern of SSI loss or retention and their ability to replace at least 75%
of their lost income during the follow-up period. Subjects who lost SSI but substantially
replaced the income (OFF/SIR) were less likely to worsen (OR=.42) and were 1.48 times
more likely to be better off than subjects who remained ON SSI. Subjects who lost
SSI and did not replace this income (OFF/NSIR) were less likely to remain unchanged
than subjects remaining ON SSI (OR=0.65); however, whatever change occurred did not
have a consistent direction or yield a statistically significant effect. Among subjects
showing some degree of disruption of SSI support, OFF/SIR subjects were least likely
to show a worsening pattern. NSIR subjects were 3.3 times more likely to worsen than
OFF/SIR subjects, and subjects who experienced a partial disruption in SSI support
(PART) were 2.4 times more likely to worsen than OFF/SIR subjects.
Table 8 reports the simple weighted percentages of cases in each of the five global
outcomes categories and the weighted percentages after statistically equating groups
on logistic model covariates. In Table 8, "simple weighted percentages" refers to
estimates of the actual percentage of cases in each group in the target population,
and "adjusted weighted percentages" treats each group as if the distributions on
model covariates were at average levels for each group. Overall, unadjusted outcomes
distributions for OGAP categories roughly matched distributions adjusting for model
covariates, generally repeating the pattern of findings reported in Table 5. Without
adjusting, OFF/SIR subjects were once again least likely to be "Worse" and OFF/NSIR
were least likely to remain the same, but with no clear direction to the effect for
this group. Although some group differences emerged as statistically significant, whether adjusting
for model covariates or not, all OGAP groups showed a distinct bias toward sustained
improvement rather than sustained deterioration.
Discussion
Contrary to our expectations, our results indicate that the termination of SSI DA&A
benefits did not cause widespread declines in functioning among the majority
of former SSI DA&A beneficiaries included in our study. Even though many were not
able to replace their lost income (cf. Campbell, Baumohl, and Hunt, this issue) and
had unstable housing situations (cf. Norris et al., this issue), their material hardships
did not translate into large functional declines in the areas we examined. In fact,
we found the opposite: Most former DA&A recipients, 80%-90%, were either unchanged
from the time they lost their benefits or had actually improved somewhat over time.
This benign outcome is not unalloyed, however. A small but significant proportion
of individuals showed sustained functional declines over time, especially in the
areas of use of alcohol and other drugs, victimization, and family conflicts. Of
these three areas, family conflicts tended to be the most problematic soon after
benefit termination but lessened thereafter. The individuals most likely to show
declines were those who lost their SSI benefits and failed to replace at least 75%
of their lost income.
Other findings were somewhat less dramatic or were inconsistent. There were few site
effects, except those noted for Chicago and Detroit, suggesting that the major trends
of improvement or no change among the majority of former DA&A recipients were common
across study sites. We should also note that although the sampling frame for the
multisite study represented about 26% of all former DA&A beneficiaries, the sample
did not represent the DA&A population nationally (Wittenburg et al., this issue).
Thus our findings may not generalize to former DA&A recipients living in other areas
of the country. We also did not assess several areas where declines in functioning
might have been more evident (see note 3). For example, Norris et al. (this issue)
found that a significant number of former DA&A beneficiaries had problems attaining
stable housing after losing their benefits.
For reasons that are hard to explain, subjects who were married or divorced
were more likely to fare poorly and less likely to maintain the same level of
functioning after benefit termination. Perhaps this result is due to the additional
financial obligations of providing for a family. Interestingly, African-American
subjects were among the most likely to report improved functioning over time. Older
subjects tended to report stable functioning and to have a smaller chance of decline
than other subjects. This result is also contrary to what we expected. As discussed,
we thought that older subjects would have greater difficulties adapting to the loss
of income because of more chronic medical conditions and a longer history of drinking
and other drug use and impaired functioning. However, the data clearly did not support
this expectation. Older subjects were more likely to maintain a stable level of functioning
over the course of the study, while younger subjects had an increased chance of worse
functioning. It may be that, following the loss of benefits, younger subjects had
more problems with continuing or increasing drug use and experienced more incidents
of victimization.
Among the areas of functioning examined, use of alcohol and other drugs increased
most in problem severity. This is contrary to the belief of detractors of the DA&A
program that DA&A beneficiaries used most of their federal cash benefits to purchase
alcohol and illegal drugs (see Hunt and Baumohl, a and b, this issue). If this were
true, the loss of income should have resulted in declines in use. Instead, those who
fared the worst, namely those who lost benefits and did not replace their income
(at least legally), showed the greatest increase in alcohol and other drug use. Obviously,
factors besides the availability of public assistance money drove rates of substance
use in this population. One such factor may have been the loss of the mandate to
participate in treatment, which, as shown by Swartz, Campbell et al. (this issue),
resulted in substantial declines in treatment participation over the course of the
study. If treatment participation suppressed alcohol and other drug use, its premature
end may have been responsible for the increases in use. It may also be that the loss
of a representative payee to monitor expenditures gave those inclined to drink
heavily or use illegal drugs greater freedom to spend their money this way.
It is not clear why termination of the DA&A program did not result in a greater degree
of problems for more individuals. The data collected for the multisite study do not
permit an empirical assessment of this issue. In retrospect, there are many possible
explanations, but all of them are conjectural. For example, it may be that individuals
were simply more resilient than expected. Then too, we strongly suspect that the
historically unprecedented period of economic expansion during the study period softened
what might otherwise have been a much harder landing. Although few former beneficiaries
found work to replace lost income, it is likely that during this time of relative
abundance their relatives and friends could more easily share housing and other resources.
As our findings on family conflict suggest, however, this may have had significant
emotional cost.
We have to consider as well the possibility that methodological effects biased our
findings. It is possible that when the study began, some individuals exaggerated
their problems under the mistaken assumption that the study was somehow related to
their retention of benefits. As the series of interviews unfolded, they may have
seen that there was no real benefit to exaggerating their symptoms, and the resultant
more realistic assessments from later time-points would have trended in the positive
direction. Adding to the likelihood of this scenario is that the survey form itself
was constructed in such a way that answers indicating problems in a certain area
resulted in a greater number of questions being asked in that area. As subjects participated
in successive interviews, they would certainly have become savvy about this pattern. Those
who wanted the shortest interview possible would know that if they denied
having problems, they would be asked fewer questions and finish the interview sooner.
If this change in response pattern occurred among enough subjects, it could account
for some of the general trend toward improvement seen in almost all of the functional
areas assessed in the multisite study. Finally, we used a rather conservative criterion
for change: a shift in percentile points equivalent to one standard deviation.
Some members of the SSI Study Group (see Swartz, Tonkin, and Baumohl, this issue)
thought that the baseline-data collection period did not reflect a true baseline
and that many individuals had already begun to adapt to the loss of benefits even
before the checks stopped coming. According to this line of reasoning, the baseline
data collected for our study did not represent a true bottoming-out in terms of functioning,
which would have occurred some months prior to termination of the DA&A program. However,
if this were correct, the gains that we saw over time would be even more pronounced
had we had a "true" baseline showing greater problems in functioning. Others in the
Study Group thought that the assessment instrument, developed specifically for this
study and not validated against established instruments, was flawed because it was
not especially sensitive to change. However, since the instrument did detect changes
in the positive direction, one would have to argue that it was insensitive specifically
to negative change. This is possible, but not especially compelling in the absence of any
supporting evidence, particularly given that the modal score on most scales was zero (indicating
"no problems" in a functional area). With so many scores at this floor, the scales were,
if anything, biased against finding change in the positive direction, yet improvement was
nonetheless the predominant direction of change among those who evidenced change.
Even allowing that some of these methodological issues affected study outcomes, it
remains striking that we did not find more problems and more increases in problem
severity over time in a population many thought to be poorly equipped to handle loss
and stress. On balance, our study suggests that termination of the DA&A program,
on average, did not produce dire social and personal consequences.
Moreover, we found, contrary to expectations, that a substantial proportion
of individuals replaced their lost income and showed sustained improvements, often
in several areas of functioning. And, for reasons not discernible from our data,
some people who did not replace their lost income also avoided precipitous declines
in functioning. In a number of important ways, then, the legislation ending the DA&A
program achieved its goals of trimming the disability rolls while not causing undue
harm to the majority of those affected. The data also indicate, however, that a small
but significant group of individuals with the most problematic alcohol and other
drug-use problems had the poorest outcomes and appeared to have suffered a decline
in functioning and an increase in drug-use problems upon program termination. For
these individuals, lost to an extent in the statistics, loss of DA&A benefits presented
very serious problems and few options for remediation.
Publication Information:
Article Title: General Course and Correlates of Improvement and Decline Following
Termination of the Supplemental Security Income Impairment Category for Drug Addiction
and Alcoholism. Contributors: James A. Swartz - author, Zoran Martinovich - author.
Journal Title: Contemporary Drug Problems. Volume: 30. Issue: 1/2. Publication Year:
2003. Page Number: 425+. © 2003 Federal Legal Publications, Inc. Provided by ProQuest
LLC. All Rights Reserved.

Jessie Rayl
thedogmom63 at frontier.com
www.facebook.com/Eaglewings10
www.pathtogrowth.org