Tuesday, October 6, 2015

Can you use the SMART digest kits for proteomics?


Okay, y'all know I've been trying to find a good, easy, and reproducible protein digestion method to get behind. And I've mentioned the SMART (previously Perfinity FLASH) digest kits before. The big question floating around is: they work for single proteins just fine, but how well do they work for proteomics?

According to this paper it works pretty darned well. Now, this is just one paper and all, but I really can't come up with a reason that a method that digests one protein wouldn't digest a whole bunch of 'em and do it well (so long as the protein to enzyme ratios aren't all wonky).
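(Just to put rough numbers on the "wonky ratio" point, here's a back-of-the-envelope sketch of my own. The 1:20 to 1:100 trypsin-to-protein (w/w) range is the commonly quoted rule of thumb for in-solution digests, not anything pulled from the SMART kit protocol.)

```python
# Rough check of trypsin amounts for a digest at commonly cited w/w ratios.
# The 1:20 and 1:100 bounds are the usual in-solution rule of thumb,
# NOT values from the SMART/Perfinity kit protocol.

def trypsin_needed(protein_ug, ratio):
    """Micrograms of trypsin for a given protein load at 1:ratio (w/w)."""
    return protein_ug / ratio

for total_protein_ug in (10, 50, 100):                # typical sample loads
    low  = trypsin_needed(total_protein_ug, 100)      # leanest common ratio
    high = trypsin_needed(total_protein_ug, 20)       # richest common ratio
    print(f"{total_protein_ug:>4} ug protein -> "
          f"{low:.2f}-{high:.2f} ug trypsin (1:100 to 1:20)")
```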

Sure, it's N=1, but it sure looks like it works!

Monday, October 5, 2015

First impressions of the free LFQ node from OpenMS


Over the weekend I finally got to toy around with one of the cool free nodes from the Kohlbacher lab that we can install into Proteome Discoverer 2.0. LFQ is short for "Label Free Quan," and the nodes are freely available for anybody to download here. Now, before I go forward I should probably reiterate something that is on that page: these are 2nd party nodes and they won't be supported by Thermo's Proteome Discoverer team. Questions should be directed to the node developers. Fortunately, they seem quite straightforward!

Here are some early impressions of the nodes.

1) They are easy to install. Download the file from SourceForge, make sure all Proteome Discoverer versions on the PC are closed and run the file.  When you reopen, the nodes are there!

2) The node developers even have workflows ready for us! There is a Processing Workflow and a Consensus Workflow. Which is great! 'Cause, honestly, I wouldn't have thought to set them up that way on my own....

3) Interesting note.  SequestHT and Percolator are mandatory. Gotta have 'em or you won't go anywhere, it seems.

4) LFQProfiler appears to multithread.


Windows performance loggers are always kind of hard to interpret, but all 8 cores on this desktop appeared to be doing something when LFQProfiler kicked in. In the consensus workflow you can actually tell the LFQ node how many cores it is allowed to use! On other runs it looked like I was maybe only using 4 cores, but this really isn't a good measurement (there's a rough little sampler sketch after this list if you want something cruder but easier to read).

5) Disclaimer here: I've got like 10 versions of Proteome and Compound Discoverer on my desktop because I've been alpha/beta testing them for years. I've got some versions that are locked down for different projects, so my working environment is probably sub-ideal. But I'm gonna be honest here, and I'm likely doing something wrong: I'm finding the node a little difficult to integrate into my workflows, in an odd way. I keep getting "Execution failed" in my Administration tab, but the failed workflow can be opened and looks just fine. I do have to unhide the Intensity values, but the numbers are there and it looks like it ran real fast!
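Since the built-in performance monitor is hard to read, here's the throwaway per-core sampler I mentioned above. This is my own sketch, nothing shipped with the nodes, and it assumes the third-party psutil package is installed; it just prints per-core busy percentages once a second while a workflow is running.

```python
# Quick-and-dirty per-core CPU sampler to leave running alongside a PD workflow.
# Requires the third-party 'psutil' package (pip install psutil).
import time
import psutil

SAMPLES = 30  # number of one-second snapshots to take
for _ in range(SAMPLES):
    per_core = psutil.cpu_percent(interval=1, percpu=True)  # blocks for 1 s
    stamp = time.strftime("%H:%M:%S")
    cores = " ".join(f"{p:5.1f}" for p in per_core)
    print(f"{stamp}  {cores}")  # one column of %-busy per logical core
```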

So...first impressions: the LFQ node installs easily, has convenient pre-made workflows (additional downloads required), and seems to run fast. More analysis required to see how well it works, but it's Sunday and this is all the PD I think I'll do today!

UPDATE: 12/2/15.  Downloaded the LFQ nodes and they are AWESOME!!!



Sunday, October 4, 2015

More statistical analysis of PSM FDR from the Qu lab!


We have an awful lot of search engines these days and almost as many (more?) ways of working out our automatic false discovery rates. The Qu lab seems to have stepped back and said: let's try to sort out which FDR approach is more appropriate for large datasets, and when.

This is a heavy analysis of three different search engines available for running in or through Proteome Discoverer as well as an analysis of what false discovery rate algorithm/method or filter will leave you with the best possible results.  Interestingly, the answers appear to be very analyzer and fragmentation-type dependent.

I'll leave this here for you guys who took more maths! The answer appears to be...there is no easy answer. These are things we'll definitely need to spend more time working on as proteomics moves further and further into the BIG DATA world.
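For anyone who didn't take more maths, the quantity all of these methods are trying to estimate comes from the target-decoy idea. Here's a toy sketch of the simplest decoys/targets estimate from a concatenated target-decoy search (my own illustration; the paper evaluates far more sophisticated approaches than this):

```python
# Toy target-decoy FDR sketch: walk down the PSM list by score and estimate
# the FDR at each cutoff as decoys/targets. Illustration only -- real tools
# (Percolator, etc.) do considerably more than this.

def fdr_at_cutoffs(psms):
    """psms: list of (score, is_decoy) tuples. Returns (score, fdr) per cutoff."""
    out = []
    targets = decoys = 0
    for score, is_decoy in sorted(psms, key=lambda p: p[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        fdr = decoys / max(targets, 1)  # simple decoys/targets estimate
        out.append((score, fdr))
    return out

# Tiny fake example: high scores are mostly targets, low scores are mixed.
example = [(4.2, False), (3.9, False), (3.1, False), (2.8, True),
           (2.5, False), (2.1, True), (1.9, True), (1.7, False)]
for score, fdr in fdr_at_cutoffs(example):
    print(f"score >= {score:.1f}: estimated FDR = {fdr:.2f}")
```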

You can find the abstract for this (paywall) paper here.

Saturday, October 3, 2015

Histone post-translational modifications in monocyte-derived macrophages

(Picture credit: Laxmi Iyer, original url [unrelated to this article] here.)

It turns out that most of the histone work out there has been done on mouse cell lines or immortalized human lines. While this is undoubtedly useful information, immortalized cell lines tend to be kind of messed up, and we all know about the pluses and minuses of studying mice.

The Ciborowski lab has a plan for long-term, in-depth study of histones and their post-translational modifications in normal human macrophages. In this first study (available here, open access) they work on establishing their normal, baseline conditions for resting macrophages. Once they have that, they can go on to further studies.

For this analysis they are primarily using an LTQ-Orbitrap XL with ETD and employing both CID and ETD fragmentation. Surprisingly, the majority of the information being obtained for PTM matches is not coming from the ETD. This is likely due to the lower speed/efficiency of the earliest ETD system compared to the ones I normally get to mess around with. It does, however, contribute meaningfully to the study. This is a nice clear study, but I mostly highlight it here because I'm very interested in what they are going to do next AND how this data is going to line up with other well-established histone PTM datasets we have from other models. So...this post is kind of to remind myself to check back on these guys later...sorry...

Friday, October 2, 2015

EuPA 2016 --- Turkey?


The theme of the conference is "Challenge Accepted. Standardization and Interpretation of Proteomics." Ummm.....YES!!!!  And the lineup of speakers is already AWESOME!  You can check it out here.