Tuesday, May 29, 2012

And another one, and another one bites the dust

It started Sunday, when my sensor glucose reading was 171. It alarmed: beep gr crunk. It sounded like gears that weren't perfectly aligned, grinding. Over the past couple of days, the alarms have sometimes vibrated, sometimes beeped, sometimes made a grinding sound, and most often haven't gone off at all. I called Dexcom and they're sending me another receiver.

I used my first Dexcom receiver for five and a half months, until on Valentine's Day it gave me sensor errors and quit. RIP Dexcom I, 9/1/10-2/14/11.
I used my second Dexcom receiver for less than three months; the beep stopped working. RIP Dexcom II, 2/15/11-5/6/11.
I used my third Dexcom receiver for five months; then it gave me sensor errors and quit. RIP Dexcom III, 5/8/11-10/10/11.
My fourth Dexcom receiver I've been using for seven and a half months, but this sensor session will be its last. Goodbye Dexcom IV, 10/15/11-5/31/12 (I hope it makes it to the 31st- the sensor will need changing then).

I have read numerous people's claims that their Dexcom receivers are still working well after two or three years. I wonder if I'm just putting more wear and tear on mine by checking them frequently, or dropping them (yes, I admit to dropping mine), or if there's something in the Chicago heat and humidity, or the living-in-the-basement humidity, that's an issue.

Thursday, May 24, 2012

I just saw a blurb on nocturnal hypoglycemia in type 1 diabetics recently. The study it describes had thirty-seven type 1 adults wear CGMs and heart monitors for three nights. During those three nights, a total of 18 hypoglycemic episodes were recorded, where hypoglycemia means a blood sugar of less than 63 mg/dl. For each episode of hypoglycemia, they took data from a period at the same time of night, from the same patient, when blood sugar was at least 72 mg/dl. Then they compared data between the hypo periods and the non-hypo periods. I was interested to see that heart rate was essentially the same: the average heart rate during hypoglycemia was 62 bpm, and during non-hypoglycemia, 63 bpm, with a standard deviation much larger than 1. This was interesting to me because I like to lie in bed, waiting to fall asleep, watching my clock, with my Dexcom receiver over my heart, counting my heartbeats. The receiver acts a little like a stethoscope; it really magnifies the sound of my heart, and I can also feel the beats through it. When I go hypo, my pulse tends to feel faster, but I count the same number of beats, which I think says something interesting about my perception of time when I'm hypo.

On an unrelated note, all of my labwork from Monday's visit came back normal. My blood sugar on the labwork was 129 mg/dl; my Dexcom at that moment was showing 108 with an up arrow, and I didn't test my blood sugar on my meter because I'd treated a hypo about an hour earlier and suspected my blood sugar was moving fast.

Monday, May 14, 2012

Gangsta Diabetes by Adam Cole

I first wondered what the lyrics were to this song more than three years ago, soon after it made its internet debut. I even wrote to ask what the lyrics were, but didn't get a reply. My attempt at a transcription is below the video. The song begins about 45 seconds in:



Type 1 diabetes mellitus

it goes

yo type 1 diabetes really does suck
takes the insulin right out of us
now there's way too much glucose and fatty acids
 in our blood from what the basal secretion
of what epinephrine and glucagon does

basal secretion y'all, that's the trick
when insulin's gone,
what's going on...

with insulin gone, glucagon got the Glock
it be popping and locking up to make it stop
so when in the liver glucagon
 turns a lot of things on
like ketone body synthesis and fatty acidation
fats burn down to acetyl CoA
which in the normal condition turns to the TCA [Kreb's Cycle]
but with insulin missing ya gotta find another way
to make ketones descend from the sexy bod-ay

yeah yeah right here angelina wiki wiki

glucagon isn't gone
there's two more on its list
like gluconeogenesis and glycogenolysis
making way more gluc than you need to exist
that's why type 1 diabetics have sugar in their piss

gluconeogenesis is also low acid
so using it up kills the TCA
to regress, it's a mess when acetyl CoA
don't have any other option than ketone bod-ays.

yo an maybe some other stuff, but like what's important is it forces it down

now listen, this gets tricky:

I think I know why,
with our insulin gone,
there's so many ketones and glucose inside our circula-shon
but there's another component I'll take a moment to address
our adipocytes think our body is under stress

with insulin gone, glut4's MIA
we really got basal glucose and glucose plasma in the brain
you know how many more glucose to make,
glycerol 3 phosphate
which adipocytes use to know our energy state

above it all cortisol kills neoglucogenesis
so without glycerol kinase we can't make that shit
now the adipocyte is out of glycerol 3p
so it's thinkin' that the body really needs energy

what, un, even though we're swimming in it.
Got all kinds of glucose
yeah right skeleton man

so even though we're swimming in glucose
from the basal secretion
keepin glucagon and epinephrine
I don't give a beatin
now epinephrine's a player, insulin's out of town
so we've got a bottle of henessy, with the hsl [hormone sensitive lipase]

hormo sensitive lipase, he's without insulin
cut and dry, got the drop on glyceride like oJ simpson
Adipocytes are really out of control
sending free fatty acids like there's no tomorrow

yeah, to the blood, what, that's a triple hit for them
they got no glucose, they got no glycerogenesis
it's over, ba bang, ba bang, ba boom, that's right, it's got nerve yall

to summarize, here goes

summarize, it's no surprise I'm pulling the rap
summing biochemistry like it was on tap
from the med school equations
but never hatin relatin
gotta examine the day but take time to keep ratin'

diabetes mellitus
everything is a miss
it's not just the disease that gives you sugar piss
it's the major terrain of a metabolic state
affectin' both fatty acids and carbohydrates

yo lame dance y'all what
 bring it

Sunday, May 13, 2012

Other Statistical Measures

One of the major problems with all of the statistical measures I discussed a few posts back- all of the statistical analyses that the Dexcom software does with my numbers- is that they don't consider the following two hours to be different at all:

Hour A: 90, 100, 120, 128, 119, 104, 98, 92, 88, 84, 79, 81

Hour B: 79, 81, 84, 88, 90, 92, 98, 100, 104, 119, 120, 128

All of us who've had hours like these know that there's a huge difference. With hour A, you don't know what to do- your blood sugar zoomed up 38 points in 15 minutes, then it dropped 49 points in the next half hour or so. With hour B, your blood sugar just kept going in the same direction, a 49 point rise, and you are either injecting to correct it or keeping your finger on the insulin waiting to see if it rises further- you know what you're watching.

But as far as Dexcom is concerned, hours like these- days like these- are the same. Your blood sugar average is the same, your standard deviation is the same, your high and your low and interquartile range and all these supposed measures of glycemic variability- they're the same.

It's no use complaining about these things if you can't suggest something better. Fortunately, I can. I suggested it a few posts back: looking at the average of the absolute value of the first derivative of blood sugar. This basically measures how far apart any two adjacent sensor readings are likely to be. First derivative, by the way, just means slope. Using the absolute value just means that a rise is counted the same as a fall- otherwise the average first derivative would be very close to zero for anybody with decent blood sugar control, where by decent I mean good enough to stay out of DKA. I suggest as a unit of measure the change in mg/dl per five minutes, although you can certainly use any other measures of blood sugar and time that you want.
This probably sounds kind of difficult to compute, but actually it's not bad. You can do it in two ways.

The hard way: look at each data point, compute the difference between each adjacent pair, add the differences up, and divide by the number of time intervals. For hour A above, the differences are 10, 20, 8, 9, 15, 6, 6, 4, 4, 5, 2; the sum is 89; there are 11 five minute intervals, so the score is 8.09. For hour B above, the differences are 2, 3, 4, 2, 2, 6, 2, 4, 15, 1, 8; the sum is 49; dividing by 11 gives a score of 4.45. That is good; the hour with more variation had a higher score.
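
For anyone who wants their computer to do the arithmetic, here's a minimal Python sketch of the hard way (the function name and the assumption of evenly spaced five-minute readings are mine):

```python
def mean_abs_derivative(readings):
    """Average absolute change between adjacent sensor readings.
    Assumes readings are evenly spaced five minutes apart, so the
    result is in mg/dl per five minutes."""
    diffs = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return sum(diffs) / len(diffs)

hour_a = [90, 100, 120, 128, 119, 104, 98, 92, 88, 84, 79, 81]
hour_b = [79, 81, 84, 88, 90, 92, 98, 100, 104, 119, 120, 128]
print(round(mean_abs_derivative(hour_a), 2))  # 8.09
print(round(mean_abs_derivative(hour_b), 2))  # 4.45
```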

The easy way, something you can estimate with just a glance at your 24 hour screen, is to divide the screen into lines of up and down and just add up the differences between adjacent high points and low points. Looking at hour A, that'd mean breaking it into the rise from 90 to 128, the drop from 128 to 79, and the rise from 79 to 81: 38 + 49 + 2, which fortunately gives a sum of 89 again, to be divided by 11. Looking at hour B is even easier because it's all one line, from 79 to 128, which is a difference of 49, to be divided by 11.
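
If you'd rather have software find the trend lines, here's a sketch of one way to break a day into rises and falls (my own logic, not anything Dexcom or Minimed provides):

```python
def trend_swings(readings):
    """Split readings into monotonic runs (rises and falls) and
    return the size of each swing. The swings sum to the same total
    as the point-by-point differences."""
    swings = []
    start = readings[0]
    rising = None  # direction of the current run, unknown at first
    for prev, cur in zip(readings, readings[1:]):
        if cur == prev:
            continue  # flat stretches don't end a run
        direction = cur > prev
        if rising is not None and direction != rising:
            swings.append(abs(prev - start))  # close out the old run
            start = prev
        rising = direction
    swings.append(abs(readings[-1] - start))
    return swings

hour_a = [90, 100, 120, 128, 119, 104, 98, 92, 88, 84, 79, 81]
print(trend_swings(hour_a))       # [38, 49, 2]
print(sum(trend_swings(hour_a)))  # 89, same sum as the hard way
```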

Here is how you'd estimate it looking at a 24 hour screen (and this is much more doable with Minimed, which lets you scroll through data, than it is with Dexcom, which doesn't). Here is a 24 hour screen I took a picture of a while back:
We're going to do a very rough estimate- and it's going to be a lowball estimate- of my average absolute value of the first derivative. First, we break it up into trend lines. The first trend line looks like a drop from roughly 300 to roughly 80, the second is fairly flat around 80, the third is a rise from 80 to roughly 280, the next is a drop from 280 to 150, the next is a rise from 150 to 180, the next is a drop from 180 to 50, the next is a rise from 50 to 210, the next is a drop from 210 to 70, and the last is a rise from 70 to 80. Adding the differences on these trends, I get 220+0+200+130+30+130+160+140+10 = 1020. The number of 5 minute intervals in a 24 hour period is 287, so I divide 1020/287 and get roughly 3.6. If I were using the software to break up the lines, I would get a higher number, because there are more drops and rises than I really accounted for, but 3.6 is a reasonable lowball estimate.

I believe that this measure is a much better indicator of how difficult your blood sugar is to deal with than just standard deviation.

Thursday, May 10, 2012

Pharma Worries

Over the weekend, I read a book by Alison Bass called Side Effects: A Prosecutor, a Whistleblower, and a Bestselling Antidepressant on Trial. It is about a 2004 lawsuit brought by the New York State Attorney General against GlaxoSmithKline, charging that GlaxoSmithKline had criminally withheld information about the use of Paxil for pediatric depression. GlaxoSmithKline settled out of court.

 On a larger scale, the book is about the ways in which medications get sold in the United States. It is a grim picture, at least in the time period (1990-2004) that the book really examines. In trials to see whether Paxil helped kids with depression, Paxil showed no improvement on any score beyond the degree of improvement shown with placebo. The youth trial groups also had a four times higher rate of suicide attempts (as well as violent outbursts), with a significant increase in actual suicides. In writing about Paxil, physician researchers accepted money to write studies in which they falsified data, and they published abstracts that did not agree with their own data sets. Articles were accepted in peer reviewed journals despite the objections of the peers reviewing the data. Martin Keller stole hundreds of thousands of dollars in public funds, received hundreds of thousands of dollars more from drug companies, and wrote one of those fraudulent papers; he has yet to go to jail, be fined, or lose his job.

 In the United States, medications do not have to be shown to actually work in order to get FDA approval. They certainly don't have to be better than older medications already on the market. And they can be prescribed off label, for conditions they haven't been studied in and for populations for which they aren't approved (there are some drugs that are exceptions to this rule). I have read a number of books in the last few years that make me very worried and angry, because they demonstrate that drugs taken by millions of Americans pose a clear and present danger to their health- without having any of the claimed effects whatsoever!

I believe I wrote a post last year after reading Mad in America: Bad Science, Bad Medicine, and the Enduring Mistreatment of the Mentally Ill by Robert Whitaker. That book makes the shocking claim that mental illnesses diagnosed before "antipsychotic" prescriptions existed had staggeringly higher rates of remission, and that studies of "antipsychotic" medications are done in extremely unethical and unscientific ways, by people paid large sums of money. Schizophrenics in countries where antipsychotics are unavailable or less likely to be used have the same high remission rates observed in the United States a hundred years ago. (This really hit home because I had just read a book about the insane asylum established in Illinois at the urging of Dorothea Dix in the mid nineteenth century. It boasted a cure rate of greater than 50%.)

 Although I have primarily read about medication misinformation around drugs for psychiatric issues (the mentally ill are in many ways a uniquely vulnerable population), medical misinformation disseminated to further the interests of big pharma does not stop there. Anybody who reads much in the medical journals about new therapies tried for any medical condition will come across abstracts that don't agree with the data in the article. Some of this is innocent. Some of it is not. You will also find many medications or medical treatments whose benefit over another medical treatment is some bizarre endpoint that doesn't seem very important. And usually, it isn't. If you look at a hundred different outcomes, and you consider something statistically significant if the chance that it's a coincidence is less than 1% (by statistical models using normal variation- no, I'm not explaining that), coincidence alone will hand you publishable findings: across a hundred independent outcomes, the chance of at least one false positive is 1 - 0.99^100, or about 63%.
This almost certainly explains why, when a diabetes medication or treatment has no apparent benefit, I read statements like "the treatment group had better preservation of c-peptide," or "insulin antibody levels were lower in the treatment group," or "weight gain was lower in the treatment group." In the three articles in question, respectively, the data showed: no reduction in the primary end markers, which were total insulin dose and A1c; no reduction in the primary end marker, which was A1c; and weight gain that was only lower at very specific points in time after starting the trial medication vs the control medication- they looked at weight every two weeks, and it was higher some weeks and lower others. The three in question are studies on insulin pumps (vs MDI), Humalog (vs Humulin R), and Lantus (vs NPH). That drug companies are making false claims about the efficacy and safety of diabetes medications to doctors and patients is not questionable; it's a certainty.

 The very first one I noticed was right there in the prescribing information for Lantus. I'm sure y'all have heard the Lantus claim to fame: that it gives a relatively flat level of insulin action for 24 hours. Now, if I wanted to test how long a medication lasted in a person (or animal), I'd keep testing their levels, say once an hour, until they stopped being measurable. Do you think this is what Sanofi-Aventis did? No! (Or at least, that's not what they say they did.) What they did was take a large number of people, give them Lantus, and measure blood levels of Lantus every hour until... 24 hours. The first people in the study to stop having Lantus in their blood stopped having it there between 10 and 11 hours after the shot. The majority of people, at 24 hours, still had Lantus at a level very similar to that of the first hours. Guess what that means? It means that if you are an average person taking Lantus every 24 hours, you have a time period in which you have two Lantus shots active, causing a significant difference in your Lantus levels. Surprise!

 Although there are dozens of companies making and selling insulin worldwide, only three sell insulin for humans in the United States: Lilly, Sanofi, and Novo Nordisk. All three have interesting stories that come up if you google them with the word "lawsuit", although Sanofi seems to have been sued over sex discrimination charges involving its female pharmaceutical reps, and Novo Nordisk over not paying overtime.

Having thought a lot about the dangers in medical products, one of my first instincts is to try to figure out which medications a person should be most skeptical of. Here's my list:
- Medications that have been recently released and that do not exist in generic form. New medications are more likely to have common side effects that haven't been exposed, or that didn't reach statistically significant levels in trials. Don't take a new medication unless you want to be a guinea pig (noble of you) or it stands to make a really big difference in your life.
- Medications for which a clear need does not exist. Yes, treatments for male pattern baldness exist. One of them can also cause male infertility. Medications that have only been shown to affect proxy measures of health should be considered suspect as well. Don't take a medication to lower your cholesterol or blood pressure or blood sugar or heart disease risk if it hasn't been shown to decrease risk of death (or at least whatever health complications you truly care about- because I'm fairly certain you wouldn't mind high blood pressure if it weren't for the risk to your eyes and kidneys and heart- so if the medication lowers blood pressure without protecting those things, it's not doing you any good).
- Medications that were tested and are primarily used in a population that isn't you should be suspect, especially if they are new. Be wary of a drug that is only FDA approved for lowering cholesterol if your doctor wants to prescribe it for your arthritis.
- If your doctor has samples, beware. That means his primary education about the product almost certainly came from a pharma rep, who almost certainly sugar coated the efficacy of the medication, and not from experience, colleagues, or medical journals. It doesn't mean the product or medication is a bad thing, just somewhat suspect.

One thing I wonder is whether products and medications coming from smaller companies should be given more or less trust. Little companies have less money to influence doctors and the FDA, but they have more invested in each thing they market. If it's not great, they have nothing. I don't know the answer to that one.

Sunday, May 06, 2012

Which Would You Rather?

I have been uploading my Dexcom data once per week, and I had a few weeks in a row with averages in the 140s, not much hypoglycemia, some hyperglycemia. I knew this week was gonna be different. The comparison looked like this:

Range      Apr 23-29   Apr 30-today
LOW-55        0%            1%
55-70         2%            9%
70-159       66%           72%
159-240      26%           17%
240-HIGH      6%            1%
This week's average is 120; last week's was 145.

On an entirely different topic, I am taking a free online course in machine learning, which is a type of programming- the type of programming that would be involved in making an artificial pancreas. It is at Coursera. If you have secret dreams of inventing an artificial pancreas yourself, I suggest you take the course.

Explanations and Definitions of Statistical Terms

 When I look at my daily statistics on Dexcom's DM3 charts, here's what it gives me:

% in target: Take all of the sensor readings that I had that day, count the number that were in my target (as given to the computer, not as put in the receiver), divide by the number of readings, and multiply by 100, rounding to the nearest whole number. That's what this is.
Example: If I only had 12 readings for the day (because, say, I started a new sensor towards the end of the day) and they were 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136, and my range was set to 80-160, then Dexcom counts the 160 as out of range, and so it counts 9 in range. The total number of data points is 12. So 9/12 = 0.75, times 100 is 75, and no rounding is required: I was 75% in range. THIS NUMBER CHANGES ACCORDING TO THE RANGE THAT WAS SET.

% in low: Take all the sensor readings that I had that day, count the number that were at or below the bottom number in my target (as given to the computer, not as put in the receiver), divide by the number of readings, and multiply by 100, rounding to the nearest whole number. That's what this is.
Example: If I had the same 12 readings given above, and the same range, I get 0% in low. If I changed my target range to 140-400 (who knows why), it'd count that 3 were at or below 140, out of 12 total. 3/12 = 0.25, times 100 is 25, no rounding required: 25% in low. THIS NUMBER CHANGES ACCORDING TO THE RANGE THAT WAS SET.

% in high: Take all the sensor readings I had that day, count the number at or above the upper number in the target (as given to the computer, not as put in the receiver), divide by the number of readings, and multiply by 100, rounding to the nearest whole number. That's what this is.
Example: Using the numbers in the first example and the range of 80-160, I had three readings at or above the top of the range, out of twelve readings. 3/12 is 0.25, times 100 is 25, no rounding required: 25% in high. THIS NUMBER CHANGES ACCORDING TO THE RANGE THAT WAS SET.
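
All three percentages are the same counting recipe with different comparisons. Here's a minimal Python sketch of my understanding of the rule- a reading at or below the bottom of the range counts as low, and one at or above the top counts as high. This is my reading of the software's behavior, not Dexcom's actual code:

```python
def percent_low_target_high(readings, low, high):
    """Percent of readings below, inside, and above the target range.
    A reading at or below `low` counts as low; at or above `high`
    counts as high (so 160 is out of range for an 80-160 target)."""
    n = len(readings)
    n_low = sum(1 for r in readings if r <= low)
    n_high = sum(1 for r in readings if r >= high)
    n_target = n - n_low - n_high
    return (round(100 * n_low / n),
            round(100 * n_target / n),
            round(100 * n_high / n))

readings = [153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136]
print(percent_low_target_high(readings, 80, 160))   # (0, 75, 25)
print(percent_low_target_high(readings, 140, 400))  # (25, 75, 0)
```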

# of readings: This is the number of sensor readings that were given by dots on the Dexcom receiver during the day. A full day of readings would be 288 (except on days when you changed the time setting on the receiver). For the example above, it would be 12. This is important because it tells you how completely the data really reflects your blood sugar for the day. This number is the same no matter what your target range is.

Min Value: This is the lowest sensor reading for the time period. In the example above, the min value was 136. If the lowest reading the Dexcom showed on the screen during the day was LOW, then the value it puts into this chart is 39. This number does not change no matter what your target range is set to.

Average: Take all of the sensor readings for the day and add up their values. If any sensor reading was LOW, use the number 39 instead. If any sensor reading was HIGH, use the number 401 instead. Divide by the number of readings. This number is the same no matter what your target range is set to.
Example: If I only had 12 readings for the day (because, say I started a new sensor towards the end of the day) and they were: 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136, the software adds them up- 1797. Then it divides by the total number: 1797/12 = 149.75. Then it rounds to the nearest whole number: 150 is the average.
Another Example: On April 2nd, I used a sensor that didn't function very well. I got 65 readings (other than ??? and out of range). Here they are: 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104. In computing the average, the software assumes all these HIGHs are 401s (although in actual fact I was in the 200s during those readings). So, adding these values, it gets 14,690. Then it divides by the number of readings: 14690/65 = 226.
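
Here's a short Python sketch of that averaging rule, with LOW and HIGH replaced by 39 and 401 as described above (the function name is mine, not Dexcom's):

```python
def dexcom_average(readings):
    """Average of the day's readings, reading LOW as 39 and HIGH
    as 401, rounded to the nearest whole number."""
    values = [39 if r == "LOW" else 401 if r == "HIGH" else r
              for r in readings]
    return round(sum(values) / len(values))

readings = [153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136]
print(dexcom_average(readings))  # 150
```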

Max Value:  This is the highest sensor reading in the time period. If the highest reading was HIGH, then this value will be given as 401 instead. This stays the same if the target range changes.
Example: If the data was: 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136, then Max Value is 164.
If the data was 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104, then Max Value is 401.

Standard Deviation: When I want to compute something very tedious, I compute standard deviations. Here's how to compute a standard deviation. First, find the average (instructions above). Then find the difference between each data point and the average. Square each of the differences, and then add the squares together. Divide by the total number of data points. Take the square root, and that's the standard deviation. The standard deviation goes up a lot if only some points are far away from the average. This stays the same if the target range changes.
Example: If the data was 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136, the average was 150. The differences between each data point and the average are 3, 9, 14, 13, 10, 1, 1, 4, 9, 12, 13, 14. The squares are 9, 81, 196, 169, 100, 1, 1, 16, 81, 144, 169, 196. The sum of the squares is 1163. Dividing by the number of data points (12) gets us 1163/12, about 96.9. The square root of that is 9.84..., and that rounds to a standard deviation of 10.
Computing the standard deviation for a full set of data would be tricky, but here are some data sets and standard deviations that may be interesting.
If the data points are: 150, 150, 150, 150, the standard deviation is 0. All of the data points are the same as the average.
If the data points are 110, 150, 150, 190, the standard deviation is 28. The average is 150, and two of the data points have a deviation- a difference from the average- of 40, and the other two have a deviation of 0.
If the data points are 130, 130, 170, 170, the standard deviation is 20. All of the points are 20 away from the average.
Standard deviation is far more meaningful for larger sets of data.
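
If you'd rather not do this by hand, here's the same computation in Python. Note that it uses the exact average (149.75) rather than the rounded 150, which is why it comes out a hair under 10 before rounding:

```python
from math import sqrt

def std_dev(readings):
    """Population standard deviation: square root of the average
    squared difference from the mean."""
    mean = sum(readings) / len(readings)
    return sqrt(sum((r - mean) ** 2 for r in readings) / len(readings))

print(round(std_dev([153, 159, 164, 163, 160, 151,
                     149, 146, 141, 138, 137, 136])))  # 10
print(std_dev([150, 150, 150, 150]))                   # 0.0
print(round(std_dev([110, 150, 150, 190])))            # 28
print(round(std_dev([130, 130, 170, 170])))            # 20
```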

25%: This is the 25th percentile. Multiply the number of readings by 0.25 (or divide by 4- same thing), round up to the nearest whole number, and call your answer N. Now rearrange the readings so they are in order from smallest to largest, and count to the Nth number. That's the 25th percentile (well, not exactly- but that's what the Dexcom software does).
Example: If the data points were 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104, then I had 65 data points. 65/4= 16.25, which rounded up is 17.
Finding the smallest 17 numbers in the set, I get 71, 78, 82, 84, 96, 100, 104, 104, 108, 110, 113, 113, 114, 117, 123, 124, 131. So 131 is my 25th percentile.
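
Here's that rule in Python (my guess at the software's method, per the caveat above; LOW and HIGH readings would need to be converted to 39 and 401 first, as with the average):

```python
from math import ceil

def dexcom_percentile(readings, fraction):
    """Multiply the count of readings by `fraction`, round up to get N,
    and return the Nth smallest reading."""
    n = ceil(len(readings) * fraction)
    return sorted(readings)[n - 1]

readings = [153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136]
print(dexcom_percentile(readings, 0.25))  # 138, the 3rd of 12 sorted readings
```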

Median: This is the 50th percentile. Put the readings in order from smallest to biggest, then pick the middle number. This is really the number that most people have in mind when they say average, even though this is not the average. In real math, if I have an even number of data points, so that there is no middle number, I average the two middle numbers to get my median. Dexcom software simply picks the larger of the two middle numbers, I think.
Example: If this is the data: 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136, then there are two middle numbers, 151 and 149. Dexcom software would give the median as 151, I think; the true median is 150. Notice that this is the same as the average, which was 150.
Example: If this is the data: 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104, then the middle number is 168. Notice that this is much lower than the average, which was 226.
Important To Know: When the middle numbers and bottom numbers are closer together than the middle numbers and top numbers, then the average will be bigger than the median (like in the second example). When the lows are about as far away from the middle numbers as the top numbers are, then the median and average will be very close together. When the low numbers are much lower than the middle numbers but the top numbers are close to the middle numbers, the average will be lower than the median.

When your highs are because of mealtime spikes, expect your median to reflect your fasting blood sugars and your average to be much higher because it will show more of your highs.
Examples:
If your data set is 100, 110, 300, your average is 170 and your median is 110.
If your data set is 100, 110, 120, your average is 110 and your median is 110.
If your data set is 40, 110, 120, your average is 90, and your median is 110.
Median is like your normal blood sugar.

75%: The 75th percentile is the flip side of the 25th; multiply your number of readings by 0.75, and round up to the nearest whole number. Call your answer N. Put your readings in order from smallest to largest and count up to the Nth number. That's your 75th percentile.
Example: If the data points were 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104, then I had 65 data points. 65 * 0.75 = 48.75, rounded up is 49.
In increasing order, my first 49 data points are:
 71, 78, 82, 84, 96, 100, 104, 104, 108, 110, 113, 113, 114, 117, 123, 124, 131, 134, 137, 139, 141, 142, 144, 147, 149, 151, 154, 156, 157, 157, 160, 168, 168, 170, 175, 179, 179, 181, 182, 187, 187, 226, 291, 347, 356, 358, 370, 375, 380. So, 380 is my 75th percentile.

Interquartile Range: This is another way of measuring how much your numbers bounce around, and it's much easier to compute than the standard deviation. All you do is take the 75th percentile and subtract from it the 25th percentile.
Example: If my data points were 179, 187, 181, 182, 187, 175, 170, 168, 168, 154, 156, 160, 149, 157, 157, 147, 141, 139, 151, 144, 142, 131, 134, 123, 117, 113, 114, 110, 108, 100, 113, 82, 78, 71, 84, 104, 124, 137, 179, 226, 291, 347, 384, HIGH, 399, HIGH, HIGH, 375, 358, 356, 370, 384, 392, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, HIGH, 380, 96, 104, then my 25th percentile was 131 and my 75th percentile was 380. 380- 131 = 249. 249 is my interquartile range.
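
Using the percentile helper sketched earlier, the interquartile range is a single subtraction; for the 65-reading day above it gives 380 - 131 = 249.

```python
def interquartile_range(readings):
    """75th percentile minus 25th percentile, using dexcom_percentile
    from the sketch above."""
    return (dexcom_percentile(readings, 0.75)
            - dexcom_percentile(readings, 0.25))
```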

Estimated Standard Deviation:  Sorry, I don't know what Dexcom is doing with this. 

 Standard Error of the Mean: This is something I did learn how to calculate in my statistics course, but it is not meaningful with regard to Dexcom (or Guardian) data and really should not be included. Basically, what this number is meant to tell you is: if you had picked numbers from a random distribution, how well does your sample represent the real numbers? In other words, did you check your blood sugar often enough to guess at the average blood sugar, and how far off is your guess likely to be? However, this really doesn't answer that question here, because our data points are not random and because the sensor errors are much bigger than 0.

Coefficient of Variation: This is an easy computation. Take the standard deviation and divide by the average. Multiply by 100 and put a percentage sign after it.
Example: 153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136 is my data set, so 149.75 is my average. My standard deviation is about 9.84. 9.84/149.75 = 0.0657, or 6.6%, a very small coefficient of variation.
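
And in Python, reusing std_dev from the standard deviation sketch above:

```python
def coefficient_of_variation(readings):
    """Standard deviation as a percentage of the average."""
    mean = sum(readings) / len(readings)
    return 100 * std_dev(readings) / mean

readings = [153, 159, 164, 163, 160, 151, 149, 146, 141, 138, 137, 136]
print(round(coefficient_of_variation(readings), 1))  # 6.6
```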

Tuesday, May 01, 2012

Whoa! An Advantage to Using NPH

I was looking for data on how high an A1c has to be for a person to be very likely to be going into DKA, and came across something entirely different.

This study analyzes the risk of DKA in youth with diabetes (age two to nineteen years) in 2002-2008 who were taking three or more shots per day, and compares the risk and other characteristics of DKA among those who used Lantus, Levemir, or NPH and had been using the same insulin for at least a year and a half.
To cut to the chase, those using Lantus and Levemir went into DKA way more often: an average of once per 15 years, vs an average of once per 28 years for those on NPH. The authors theorize that this is because missing a shot of Lantus or Levemir, given once per day, is riskier than missing a shot of NPH, which is given twice per day. I'm not entirely convinced. Read the Article
Unfortunately, this study strongly suggests that from the point of view of cost effectiveness, Lantus and Levemir are no good whatsoever. DKA is very expensive, and Lantus and Levemir cost more than NPH. The only advantage shown so far for using Lantus is a reduction in hypoglycemia- and hypoglycemia doesn't tend to be very expensive to treat. Levemir given twice per day shows better A1cs than NPH given twice per day, by an average of 0.05 A1c points- statistically significant, but not otherwise.

I am going to look for similar studies on adults and rates of DKA in NPH, Lantus, and Levemir users. Maybe I should switch to NPH.