
Gene Therapy Treatment Passed Clinical Trials, Priced at $1 Million

Discussion in 'Current Affairs' started by zenosia, Jan 6, 2017.

  1. Heh, I like it. Elves are a pest, spread disease, only do shit around, and need to be hunted to extinction.
     
  2. Fell (A Work of Meepmorp)

    Careful you don't cut yourself on that edge.
     
    • Like x 4
  3. ...Fell telling someone off for edginess.

    *looks to the west for the sun to rise*
     
    • Funny x 13
  4. Why is this surprising? Is there a law that says trolls must be edgy?
     
  5. I think it's more a case of:
    "the pot calling the kettle black"
     
  6. In retrospect my follow-on post had a way harsher tone than I'd meant it to carry, so my bad for jumping down your throat there. I suppose my own perspective on modern surrogacy has been shaped by it a) being performed by incredibly rich San Franciscan types, so no one's hurting for cash, b) being performed in the US, so oodles of legal protections for the surrogates themselves, which meant that c) I'm sitting here feeling grumpy that anyone might be worrying about the welfare of someone who got paid more in a year than I'll make in my entire life.* :tongue:



    *I got curious and asked, and while I didn't hear the exact amount the person in question paid their surrogate, the ballpark numbers were enough to buy a nice beachside house.

    I'm sorry, but

    The idea of "programmable intelligence" is one that sounds nice on paper: insert proper DNA, get gud thinkin' out, yay win right? I'm gonna go out on a limb and say that you're a software engineer since you call yourself a "software engineer Hoosier," [:V] so I can see why you'd think of genetic engineering in this manner. Problem is, we have no idea WTF we're doing. This is not to say that we can't try and trace cause-and-effect via observation and direct experimentation, but that both the causes and effects are extremely complex, and controlling for the many factors involved in something as subtle as "intelligence" is unbelievably difficult. To try and give you a taste of the difficulties involved:


    Let's start with one of the basic questions of "coding" in genetic engineering, namely genes vs. gene expression. On paper, genetic engineering sounds easy-peasy: code the new gene, put it into a human, yay done right? Weeeeeeellll, the real question is how that strand of nucleotides can actually go from being A-T-C-G-etc. into something useful - by and large, a new version of a protein. That gene actually has to be 'expressed,' and the process (and potential issues) relating to that is a really friggin' big kettle of fish.

    Your genes are the underlying code which provides direction for your cells, but proteins are the movers and shakers which affect things directly, which means that we've logically focused the bulk of research on coding sequences of DNA which actually spell out the sequence of amino acids in some protein or another. Problem is, that's only scratching the surface: just to give you an idea of the scope, the large majority of your DNA (~98% by the usual estimates) is made up of noncoding sequences, namely DNA which doesn't directly code for a protein.

    Noncoding DNA sequences have been popularly termed "junk DNA," which is all kinds of stupid. To use a metaphor, I'd imagine it's a lot like computer code: I couldn't tell HTML from C++, but I feel pretty confident saying that this same web page we're both accessing has quite a few background processes running which carry out vital functions (encryption and security, for instance) which could likely be removed without immediately affecting the actual picture we see. In a similar fashion, much of what pop-science has termed "junk" has all kinds of functions which we simply don't know (enough) about. Most human genes contain "introns," for instance, which is the catch-all term for sequences which are spliced out after the nucleic acids of DNA (A-T-C-G) are transcribed into pre-mRNA; the mature mRNA is then translated into the amino acids (lysine, glycine, tyrosine, etc.) of a functional protein. We used to think they were evolutionary fossils, until someone theorized that they were serving as protection for the genome, and until someone else theorized that they were actually part of a different gene, and someone else theorized that they might be controlling gene expression! They're all (sorta) right.
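    To make that transcribe-splice-translate pipeline concrete, here's a toy Python sketch. Everything in it is invented for illustration: the gene, the "CCCC" intron, and the five-entry codon table (the real genetic code has 64 codons, and real splicing is driven by sequence signals, not string replacement).

```python
# Toy model of gene expression: DNA -> pre-mRNA -> spliced mRNA -> protein.
# Sequences, the intron, and the tiny codon table are all made up.

CODON_TABLE = {
    "AUG": "Met", "AAA": "Lys", "GGU": "Gly", "UAU": "Tyr", "UAA": "STOP",
}

def transcribe(dna):
    """Coding-strand shortcut: the mRNA is the DNA with T swapped for U."""
    return dna.replace("T", "U")

def splice(pre_mrna, introns):
    """Drop each intron from the pre-mRNA (real splicing is signal-driven)."""
    for intron in introns:
        pre_mrna = pre_mrna.replace(intron, "")
    return pre_mrna

def translate(mrna):
    """Read codons three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

gene = "ATGAAACCCCGGTTATTAA"           # coding DNA with a fake "CCCC" intron
mature = splice(transcribe(gene), ["CCCC"])
print(translate(mature))               # Met-Lys-Gly-Tyr
```

    Note how deleting the intron changes the reading frame of everything downstream of it: one mis-spliced region and every later codon is read wrong, which is part of why "just edit the gene" is harder than it sounds.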

    So in other words, if we want a relatively simple fix of simply revising Protein A to make it a slightly different and more effective structure, then we have to check all the 'downstream' effects that could either block our desired effect or have unanticipated side effects. It's not safe to just "plug-and-play" with the one gene, because oftentimes that one sequence of code might well be coding for different proteins at different times, or the same one in a slightly different configuration. And of course, even assuming that we can change the gene itself safely, we also have to account for the protein's shape and interactions: proteins have a specific 'code' of amino acids, yes, but that's just their primary structure. The different amino acids of proteins naturally stabilize themselves around generally either alpha-helices, beta-pleated sheets, or a couple other more exotic forms (the protein's secondary structure), before folding into a complex shape known as its tertiary structure (and some proteins go on to interact with other proteins to form an even more-complex quaternary structure, yay ain't proteomics cool?). Some prions, infectious agents made up entirely of protein material, are believed to arise from regular proteins being misfolded in a manner we still don't fully understand, while the brains of Alzheimer's patients are characterized by two misfolded proteins (amyloid-beta and tau) which have clear effects on the progression of the disease.

    And then we get into other methods of regulating protein expression, i.e. the natural defenses like chaperonins which keep us all from being swarmed in malignant, malformed proteins, or the "packaging" effect that other cellular structures in regions like the Golgi apparatus or the endoplasmic reticulum have on certain proteins before they end up in a position to carry out their desired function. I haven't even brought up the effects of microRNA and RNA in general, cis- and trans-regulatory elements, repeat sequences or weird-ass things like transposons (which are segments of DNA that just up and move themselves around inside your genome, because apparently shit wasn't already complicated enough lol). And of course, environmental factors like the presence or absence of key molecules (sometimes by design, sometimes unanticipated) can also play a role even if the machinery's all in place to act. Both positive and negative effects can occur aaaaaalll the way up and down this chain of events, with corresponding downstream results, and tracing them back to a single isolated cause is a ridiculously complicated enterprise.



    Hell, even just with DNA expression alone there are a lot of other hidden factors in play. Let's start with the classic case of methylation and acetylation, where a particular enzyme comes along and tags a methyl group onto a coil of DNA, or an acetyl group onto the histone proteins it's wound around. An attached methyl group will tie the DNA strand closer together, while an acetyl group will spread it apart, and we know that a more tightly-coiled strand of DNA will be expressed at a lower rate than a less-coiled one. DNA methylation at least has been observed to be heritable across human generations, i.e. whatever DNA is wound tight in your parents is statistically more likely to be the same way in yours. Factors like this are known as "epigenetic" ones, since they aren't directly related to the A-T-C-G code of genes themselves, but have clear effects on gene expression. In other words, it's not enough just to write in desired genes. In fact, sometimes the underlying problem might have nothing to do with either the genes themselves or external environmental factors, but might lie in the high or low rate of gene expression.

    "Now hold on, that could be great!" you might be thinking. "Couldn't we co-opt this and do the same thing, changing people's gene expression in the way we want?" And yes, the study of epigenetics does offer the option for us to change people's gene expression both in utero and in vivo, which is obviously a powerful tool. The problem is, what exactly do we want and how do we go about getting it? If this shit was more cut-and-dried then it'd be great news, but as it is the whole issue of epigenetic factors is just one other way that we can reach bad outcomes just by accident.

    Let's say that we have a particular gene which we want to be more strongly expressed; for simplicity's sake I'll say it's something pretty much universally beneficial like one of the more minor DNA repair enzymes. So we whip up a low-effort adenoviral vector, and add in a bunch of enzymes coded to attach methyl groups to any known inhibitory factors (sections of DNA which act to stop or slow the expression rate of the desired gene in some way), along with enzymes to attach acetyl groups to the desired gene. Problem is, what else are we turning on or off? Remember, your engineered enzyme has a particular sequence of DNA (i.e. something like 5'-T-C-T-T-A-A-G-A-C-3') which it'll trigger for and attach to, which is how you're able to make it selective in the first place. With only four bases to work with, though, it's hard to pick a target which doesn't have near-identical sequences elsewhere in the genome, at least not without making the enzyme's recognition site so long that the enzyme is too large to function. So on the one hand, you have to worry about the effects of opening up or closing off the one DNA segment that you've just affected, because different segments often have different effects and code for different shit at different times in different ways. On the other hand, you've also got to worry about the other sequences you might've affected by accident, as your enzyme twigs to the wrong sequence and accidentally clamps down on some cells' ability to produce vital cellular machinery. Also, since our only functional method of in vivo delivery is to co-opt a viral vector and hope it gets to where we need it to go, it's a complete toss-up as to what the actual results are.
    IVF is at least simpler in delivery, but then you've got the problem that you don't have an actual functional human being to work with, and have to hope that you haven't just accidentally disrupted the vast and complicated process by which a single microscopic zygote turns into a walking, talking macroscale organism which can take a dump in your backyard and blame it on the dog.
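    A quick back-of-the-envelope sketch of that off-target worry, in Python. The 9-base recognition site is the example sequence from the post, but the random "genome" is a toy (real genomes aren't uniformly random, and real enzyme binding is fuzzier than a mismatch count); the point is just how fast near-matches pile up once you allow even one mismatch.

```python
# Count how often a short recognition site (or anything one mismatch away
# from it) appears by pure chance in a random 500,000-base sequence.

import random

random.seed(42)
GENOME_LEN = 500_000
genome = "".join(random.choice("ATCG") for _ in range(GENOME_LEN))
site = "TCTTAAGAC"  # the 9-base example sequence from the post

def near_matches(seq, target, max_mismatches):
    """Count positions where `target` occurs with at most N mismatches."""
    n = len(target)
    hits = 0
    for i in range(len(seq) - n + 1):
        if sum(a != b for a, b in zip(seq[i:i + n], target)) <= max_mismatches:
            hits += 1
    return hits

exact = near_matches(genome, site, 0)
fuzzy = near_matches(genome, site, 1)
# Expected by chance: 500k / 4^9 is about 2 exact hits, and allowing one
# mismatch multiplies the target space by 1 + 9*3 = 28, so ~53 hits.
print(exact, fuzzy)
```

    In other words, even a site that is essentially unique still has dozens of one-letter-off neighbors scattered around for the enzyme to clamp onto by accident, and the human genome is six thousand times longer than this toy one.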



    And of course, real-life humans don't exist in a test tube. Like any other organism, we're strongly shaped by our interactions with our ambient environment, from fertilization all the way through senescence, and the effects of our genes are difficult to separate out from the effects of a supportive or harmful environment. Now, while I lean towards "nurture" on the age-old "nature vs. nurture" argument (I majored in social science, deal widdit), studies of twins and adoptees make it pretty clear that nature has a pretty big say in things. Problem is, which part is having which particular effect? When we see an undesirable result like schizophrenia or something similarly complex, tracing it back to the source is enormously difficult. Was it the sequence of some coding DNA, or was it the microRNA which controlled how that DNA was turned into a protein? Was it the particular stable shape the finished polypeptide took, or was the problem instead with how that protein reacted to environmental stressors? Most likely, it was all of the above, which means that when trying to "fix" things, we have to consider desirable vs. undesirable effects all the way upstream (example). It's a complicated enough mess even when we're trying to solve something like Huntington's disease or CF, which are diseases which can be generally isolated back to a single sequence of DNA. But trying to improve "intelligence?" That's a couple orders of magnitude more complicated, man.


    We tend to think of problems and their resolution in a linear sense, of problem=>fix=>solution. The best description I've heard for complicated-ass issues like this is "problem alchemy:" we aren't so much solving problems as transmuting them, into different problems that we can deal with more easily. To try and make another computer metaphor, think of this as like computer overclocking, where you're trying to squeeze more and better performance out of a single set of equipment. You can increase the amount of activity the CPU is carrying out fairly easily, but that carries a lot of knock-on effects which you have to control and deal with: increased heat buildup and power usage, for starters. You can control for those factors in turn with things like increasing fan speed and simply increasing the power uptake, but those carry their own issues like reduced component lifespan and kicking up more dust, which also have to be dealt with. Yes, we can achieve desired results, but those results tend to bring costs of their own which must also be addressed.


    Let's consider intelligence by looking at a group which is, statistically speaking, smarter than you or I: da Jooooooooos. Individuals from the genetically-isolated Ashkenazi Jewish ethnicity have shown IQ scores approximately 1/2-1 standard deviation above the mean in comparable socioeconomic backgrounds: in other words, specifically Ashkenazi Jews scored measurably higher on measures of intelligence (excluding spatial reasoning, where they scored significantly lower) when measured against individuals of the same society, wealth class, and age. That Cochran paper I quoted on the subject is a controversial one, so don't take this at 100% face value, but its hypothesis has withstood enough brutal peer review that I think it's got a point regarding the observed genetic factors. Individuals who had homozygous or heterozygous dominant genes in several particular areas (remember, you have one set of chromosomes from your mom and one from your dad, so take all those genetic complications I was mentioning above and remember to multiply by two lol) appear to have higher concentrations of certain important factors affecting the production or storage of equally important things like sphingolipids (brain-related lipids that are named after the Sphinx because they're enigmatic as fuck). In one 1970 study it cited of individuals with a unique malformation of the Torsin A protein, when surveyed at age 5 (i.e. before another fifteen years of maturation and environmental factors could've muddied things further), the mean IQ was 121, compared against a population matched for age, school district, and so on.

    The individuals had to be surveyed pretty early on, though, because shortly afterwards they all died before age 15 from torsion dystonia. It's a bit like Bean from the Ender's Game series, only instead of their brains growing too much, the side effect is that their brain has so much good shit to work with that either it or another organ in their body falls apart and dies under the buildup of harmful byproducts. The Cochran paper notes similar genetic disorders affecting similar metabolic pathways, namely Tay-Sachs (total brain death by age 4), Gaucher's disease (anemia and severe hepatomegaly), Niemann-Pick disease (brain death by toddlerhood) and congenital adrenal hyperplasia (CAH). The paper's hypothesis is that the less-nasty version of these disorders - usually the heterozygous-dominant one, but in some cases like with torsion dystonia we don't know exactly what causes the progression - carries significant benefits to IQ, and that the unique environment and background of Ashkenazi Jews in Europe provided a centuries-long "incubator" to produce beneficial effects. The paper remains hotly debated across its different points, but even if its proposed causal mechanism (fitness selection pressure vs. founder effect) hasn't stood up to scrutiny in every cited case, its overall thrust is still sound AFAIK. When it comes to improving neural function, there is absolutely 'too much of a good thing.'

    The paper is telling a story about intelligence, and the moral of that story is "there is no free lunch." We can improve on natural selection, of course, but it's important to point out that the metaphorical 'therapeutic range' - the desired gap between 'have no effect' on the one end and 'have a bad effect' on the high side - is very narrow and constantly subject to change. Even minor modifications are not something to take lightly, because we're playing with factors we still barely understand. I'm not saying "don't play God" or anything like that old saw, because Huntington's is fucking horrible and I want that shit gone yesterday. But I want people to understand that this process is difficult and dangerous, and that we're gonna kill a lot of volunteers and test-tube babies in the process of finding out the difference between "safe, has desired effect" and "oops turns out your kid's gonna die before he hits puberty."

    If you want to see beneficial changes tomorrow, we've already got genetic screening and alcohol-cessation programs today. Go lobby ur Congresscritter to provide low-cost genetic screening* for anyone looking to pop a bun in the oven, and if shit like that ever went through then we'd have done more to smarten up the next generation than the next twenty years of cutting-edge research will have.




    *and genetic counseling, since this shit is hard to understand intellectually even before the emotional angle comes into it.


    EDIT: On second thought, I wonder also how much the specter of Vioxx or thalidomide influenced UniQure's decision to aim for such a low-return gene therapy in the first place. Sure, it was mechanically much easier than going for something with more mass-market appeal, but they've had to set such a ridiculous price to get any chance of making back their cost. How much might their decision have been determined by the fear of an initial success and mass-marketing getting undermined by some later whoopsie on the scale of "so it turns out this'll cause a fatal cytokine storm in 1% of the people who take it, lol sorry guys? Hey, what's with the pitchforks?" This is all speculation, mind you, but I'd be interested in knowing if this was a 'trial run' in more ways than one.
     
    Last edited: Jan 11, 2017 at 7:08 PM
    • Informative x 24
    • Like x 9
    • Insightful x 4
    • Hugs x 1
  7. Hotdog Vendor (Yo momma is fanon)

    Location:
    Down Under
    @Nuts! I need a way to rate your post Like, Hug, and Insightful at the same time. Instead I'll just say "thanks" and "that's a deep and well thought out post you have there, I hope lots of people read the whole thing".
    Bonus points for pointing out the stupidity of the "junk DNA" label. Too many people still think it's actual junk.
     
    • Like x 5
  8. When I read his post I felt humbled, so I opted to stay silent; when we see something so well thought out, we shut up and listen.
     
  9. Volt Cruelerz (Software Engineer Hoosier in Florida)

    Location:
    Florida
    You seemed to assume that I was not familiar with the mechanisms by which genes are expressed, but the only things you mentioned that were new to me were the Ashkenazi Jews, sphingolipids, and the following paragraph wherein you discussed the age of death of various illnesses, numbers I was not aware were so low. I don't disagree with you because I'm not aware of the difficulties, though perhaps I am more optimistic. That said, I'll have to look into sphingolipids sometime. They sound really interesting.

    I'm of the opinion that genetic screening, birth control, and abortion should be paid for by the government, and while I disagree with 20 years of genetics research being of lesser value than those ideas being implemented, I suppose I'm thinking more long-term. If, for some reason, genetic engineering hits a brick wall, then they gain comparable value, but I don't see that happening.

    I was talking about what we would be capable of, not what we would do. There is a significant difference between the two.

    I suspect that heterozygous boons that turn into banes when you're homozygous are going to be strongly avoided in the first generation of designer humans. They might reappear in later generations, but ultimately, I think, the first genetic engineers will steer clear of them because they tend to complicate matters, not to mention what happens if your engineered human decides to have a child with another engineered human the old-fashioned way (or is raped, etc)?

    Last little mini-point I'd like to make is that I disagree with there being no free lunch. I assert there are free lunches, but most are small. The trick with making a designer baby is that you have to pile up a lot of them.

    Now that we've got that out of the way, let's talk about the bulk of your post, which really, for the most part, didn't mean much for what I was talking about. Most of your arguments apply to synthetic genes, not swapping out a bad allele (or merely mediocre one) for a good naturally-occurring one, unless I am dramatically overestimating our ability to link phenotypes to genotypes, which, I confess, could be the case, but I doubt it (otherwise I wouldn't hold the stance that I do). Things like worrying about protein folding are largely irrelevant if you're drawing the alleles you're inserting from people that already exist and are healthy. Yes, it would be a problem if you were to just stick some synthetic sequence somewhere that you weren't intimately familiar with (which basically means everywhere right now, since higher-level protein folding is a pain in the ass to simulate, hence Foldit's existence), but "plug-and-play" does rather apply when you're talking about alleles that already exist and exist independently, not being part of the spaghetti-code chunks of our genome.

    Of course, things do get more complicated when you start talking about variable expression rates, but again, I would argue that it doesn't really matter for most first-generation changes. I think of the system that determines genetic expression as a black box that takes in genotype and outputs phenotype. It is hopelessly complicated, and understanding it would be a nightmare, but if you have access to tens of thousands of human genetic sequences and you know what traits are expressed by each person, you can do correlational studies or feed it into a neural network. In either case, you'll get allele A yields trait B with a given confidence. The former will be better at nailing down absolutes and the simpler changes, while the latter will be predictive (though not always correct) about more complex interactions. The wonder of correlational studies and neural networks is you don't have to understand the black box. All you have to do is feed the black box the input that will result in your desired output, so you take your simple changes (single site) you discovered from correlations, and your high-priority complex changes (two or three sites; I'm not going to imagine we'll be able to get much better than that in the next 20 years) and feed that into the neural network to make sure you didn't break something else. Thus, it doesn't matter if the gene you're altering actually codes for the protein that actually goes and does the thing you want, or if it actually just goes and alters the expression of gene B which alters the expression of gene C which alters the expression of gene D which actually does what you want. At the end of the day, all you care about is what your change does, not how it does it. Now, granted, a neural network powerful enough to emulate something as large and complex as the human genome would have to be phenomenally large, necessitating it be on a super-computer and training it might well take years, but once you have it, it will be invaluable. 
    Honestly, I suspect we'll see these things start to appear first because of insurance companies trying to reduce risk; the fact that you could use them to make designer babies will just be gravy.
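    A minimal Python sketch of that "black box" screening idea: don't model how an allele produces a trait, just screen every site for a statistical association between genotype and phenotype. Everything here is simulated and invented for the demo (the causal site, effect size, noise level, and population size); a real genome-wide association study needs vastly more people, sites, and statistical care.

```python
# Simulate genotypes and a trait driven by one causal site plus noise,
# then recover the causal site by correlating each site with the trait.

import random

random.seed(0)
N_PEOPLE, N_SITES, CAUSAL = 2000, 20, 7  # invented demo parameters

# genotype: 0/1/2 copies of the "A" allele at each site
genomes = [[random.choice([0, 1, 2]) for _ in range(N_SITES)]
           for _ in range(N_PEOPLE)]
# phenotype: baseline + effect of the causal allele + environmental noise
traits = [100 + 5 * g[CAUSAL] + random.gauss(0, 10) for g in genomes]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

corrs = [pearson([g[s] for g in genomes], traits) for s in range(N_SITES)]
best = max(range(N_SITES), key=lambda s: abs(corrs[s]))
print(best)  # the screen recovers the causal site without explaining it
```

    This is the appeal of the approach: the screen flags site 7 without ever knowing whether it codes for the relevant protein or merely nudges the expression of something three steps downstream. It's also the catch, since correlated-but-non-causal sites and multi-site interactions will fool a naive version of this screen.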

    To sanity check your nucleotide sequence, you can then do some experiments in live cells to make sure you didn't accidentally make them make too much of a certain protein or something. Since this is humans we're talking about, we're probably going to go all the way and try to get them to differentiate into every single type of tissue in a lab and make sure they all behave as expected before we go sticking cells with their genes in someone's uterus. In software, mission-critical code gets the shit tested out of it via multiple levels of testing from unit-testing on up. I cannot imagine that the real world of engineered human genomes will be any different (excepting of course that integration testing of a human genome would be growing a full human being in a lab and having them live out their lifespan there, so obviously that level won't happen because ethics). The procedure to establish a genome is safe for development into a human being will be long and arduous and is part of the reason I suggested it costing tens of millions of dollars in the first generation. When I say the human race's capacity for designing genomes will be very high, I mean it. I also believe it will be (and should be) bogged down by testing procedures since this isn't something you can really afford to fuck up. Imagine the lawsuits if you do.

    Now, will you be able to hash out everything before you stick a designer genome in a human zygote that will be carried to term? Of course not, but all you have to actually do is get the error rate below the natural rate of genetic birth defects. It's like autonomous cars. They don't have to be perfect to start widespread adoption, they just have to be better than the old-fashioned way.

    Perhaps I'm being too optimistic, but given the way I see things headed and the increased potential for predicting outcomes with neural networks, I very much see the capacity for very intelligent, very healthy, very attractive designer babies in the near future. I certainly don't think I'm pulling xkcd 793.

    EDIT: in light of this discussion, I got to talking to an old friend of mine who's worked in genome sequencing. It seems I was overly optimistic about our present ability to nail down the actual genetic causes of things simply because the interactions are usually far too interconnected. I still believe neural networks trained on human genomes and designed for the insurance industry will eventually be co-opted for the creation of designer humans though.

    Edit of my own since the whole designer baby thing is rather much of a tangent lol.

    I imagine you're right on that front. Even if several of the patients die, the damage will be contained, as opposed to something much larger-scale that would nuke the company from orbit.
     
    Last edited: Jan 11, 2017 at 11:23 PM
  10. Correction: you have no idea whether there is a brick wall to hit, and you don't want to hit that brick wall, so you end up with the certainty that it won't happen. Such willful ignorance is jarring.

    Continuing from above, you are talking about what we could be capable of, but present it both to yourself and to us as what we would be capable of.

    Such an assertion requires citation.
     
    • Like x 1
  11. Volt Cruelerz (Software Engineer Hoosier in Florida)

    Location:
    Florida
    I do not know if we will hit a brick wall or not. I was saying at the time of writing that I had no particular reason I could think of for it. It wasn't willful ignorance, but it was acknowledgement that obviously I don't know the future.

    If that is how it came across, I presented things in error. I apologise. It seems I need to work harder on avoiding hyperbole.

    No, it doesn't require a citation, because that's not how I was using it (sorry if that was unclear), though if I'd looked for one earlier, we'd have avoided all this mess. I was thinking of it as "this is something I believe but cannot verify at this time. If it is wrong, my entire argument falls apart." As per the edit to my post, I have come to believe that assertion is false. Thus my argument fell apart, leaving very little left. To quote the video response, I was "wrong, wrong, wrong, wrong, wrong, wrong, wrong, wrong."

    Now, can we focus discussion on the edit to Nuts's post where they attempt to return discussion to the main topic of the thread?
     
    Last edited: Jan 12, 2017 at 7:41 AM
  12. ZubZub

    Location:
    Singapore
    Guys, we need to get back to earth. CRISPR sucks big time for human genetic modification, or for modifying any organism bigger than a fly. It's got a huge failure rate, and while it's quite useful for dealing with single-celled organisms, it's not going to let us genetically modify humans anytime soon. We are also extremely far from being able to genetically modify intelligence. We don't even know how the brain works; how the fuck are we going to genetically modify that to make it better?

    The main risk with designer babies is scammers using them to trick rich parents into forking over money, doing some hax job that's going to fail, and running away with the cash. Not creating a permanent upper and lower class*



    *rich students already perform better than poor and middle-class students to a huge degree. This technology is only going to be a marginal improvement over that
     
  13. Fell (A Work of Meepmorp)

    CRISPR-Cas9 can totally be used to modify humans; all you have to do is do it dozens of times until it works right, culture the cells it worked on, coat them onto a cell-free scaffold such as a decellularized pig or cadaver organ, and transplant them into your body.

    That's how I'm gonna do it. Like a Victorian-era mad scientist.
     
    • Funny x 1
  14. Volt Cruelerz (Software Engineer Hoosier in Florida)

    Location:
    Florida
    CRISPR-Cas9 sucks for larger organisms, but newer versions are a lot better, with SpCas9-HF1 able to make edits across a bacterial genome (4.6 million base pairs) without a single detectable error in most cases. Further alterations to SpCas9 removed many of these corner cases. Granted, not messing up across 4.6 million base pairs is a far cry from not ever messing up once across the 3 billion base pairs humans have, but if you're using IVG to farm embryos, this doesn't really matter, since you can try ad infinitum.
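    The scale gap in that last sentence is worth making explicit. Using the post's own figures (~4.6 million base pairs edited cleanly vs. ~3 billion in a human genome), and making the simplifying assumption that "no error in one bacterial genome" means a worst-case rate of one error per 4.6 Mbp:

```python
# Back-of-the-envelope scaling of the cited off-target performance from a
# bacterial genome to a human one. Both figures are the post's, and the
# one-error-per-4.6-Mbp worst-case rate is a simplifying assumption.

BACTERIAL_GENOME_BP = 4.6e6   # base pairs edited without detectable error
HUMAN_GENOME_BP = 3e9         # ~3 billion base pairs, as cited

worst_case_error_rate = 1 / BACTERIAL_GENOME_BP
expected_errors = HUMAN_GENOME_BP * worst_case_error_rate
print(round(expected_errors))  # ~652 potential off-target edits per genome
```

    Hundreds of potential off-target hits per genome is exactly why the farm-and-screen approach matters: you don't need the editor to be perfect, you need enough embryos that some pass screening.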

    As for not knowing how the brain works, that's largely irrelevant. If you can establish certain alleles cause higher intelligence, you don't really need to know how they work. Obviously you need to figure out which genes are responsible in the first place which is no small task, but once you do, you can apply them relatively easily with the aforementioned technology.
     