I know there’s a fine line between a parent being impressed by their child and a parent bragging. Nonetheless, I thought this drawing Scarlett did was pretty good for a 3 year old.
When I was in my senior year at Cornell, my adviser tried to get me to enroll in graduate school. My dad had advised me to wait and see what I ended up wanting to specialize in; also, I’d likely be able to get work to pay for my degree. My adviser told me I’d never end up getting a graduate degree: those who don’t do it right away end up procrastinating forever and never get one. I knew I’d work at getting one, so I didn’t pay him any mind. I went to work, and work did have a program that paid for college classes. But I wasn’t sure where I wanted to go. Almost everyone went to Johns Hopkins Engineering because they didn’t require the GRE. But something inside just didn’t feel right. So I waited.
And I came to feel that my interests and talents lay in the field of Systems Engineering. In a sentence, I’d say that Systems Engineering is the engineer’s MBA. It’s about taking an engineering point of view toward project planning and management. Eventually I found the Systems Engineering program at Stevens Institute of Technology. As a bonus, they offered classes online, so I’d be able to attend without having to take a sabbatical to New Jersey. The classes were quite helpful as I moved into an engineering management role at work. And the professors were really great, bringing real-world experience to the classroom. Many of them were consultants to Fortune 500 companies or the US government.
After a few years of taking classes part time and writing up my special project paper (which I’ll upload to the blog soon), I finally graduated a few days ago. It was a very different feeling than graduating from Cornell for a few reasons. First of all, the challenges were very different. At Cornell I was learning how to learn. At Stevens I was learning how to apply my work life to school to learn lessons to take back to work. And I had a very different set of time management issues, this time juggling a full-time job, a wife, and a daughter along with my class load. When I was done with Cornell I was about to start my life. Finishing my graduate degree at Stevens was a milestone on a life already begun. Differences aside, I’m definitely glad I went to my graduation because it cemented the sense of accomplishment of the past few years.
Below is a gallery of photos I took at the event followed by some video.
Entering the ceremony:
Conferral of Degrees:
The audience decided to do one strong clap after each name instead of a bunch of regular claps:
Confetti after the ceremony:
Finally, a gallery of photos I had my mom take with my Rebel XTi of Scarlett and me:
I found one way around the situation where a DNG sent to RawTherapee produces a JPEG that’s missing the title and tags when read by Digikam’s Exiv2 library. It may not be perfect, or even the best way, but it’s one workaround I was easily able to confirm with about 5 minutes of messing around today. First up, you want to tell Digikam to make XMP files to go along with all files:
Yeah, it’s messy in that it creates XMPs for JPEGs and everything, but it’s probably got some side benefits like having XMP files that work regardless of TIFF version and so on. Then add your metadata to a file:
you’ll see that it creates an XMP file:
Then send it to RawTherapee for processing. You’ll get this in the end:
So if you look in Digikam, no title or tags.
At this point, based only on the examples on Exiv2’s webpage, I need to change the XMP file to have the same name as the JPEG:
Then I run this command:
Ideally, I’d like to write a script to do this so I don’t have to change the XMP filename – because if I make a few different versions of the file, I don’t want to have to keep changing the XMP. Still, it’s a solution that I know works well enough.
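As a starting point, here’s a minimal sketch of what such a script might look like. It assumes Digikam wrote sidecars with its file.ext.xmp naming (e.g. IMG_0001.dng.xmp), that RawTherapee produced IMG_0001.jpg, and that exiv2’s XMP sidecar insert (-iX) looks for IMG_0001.xmp – all assumptions worth verifying against your own files before trusting it:

```shell
#!/bin/sh
# Hypothetical sketch, not my exact commands from above.
# Copies each JPEG's matching RAW sidecar to the name exiv2 expects,
# then inserts it into the JPEG if exiv2 is installed.
sync_sidecars() {
  dir="${1:-.}"
  for jpg in "$dir"/*.jpg; do
    [ -e "$jpg" ] || continue
    base="${jpg%.jpg}"
    for raw_xmp in "$base.dng.xmp" "$base.cr2.xmp"; do
      if [ -f "$raw_xmp" ]; then
        # Copy rather than rename, so future JPEG versions can reuse it.
        cp "$raw_xmp" "$base.xmp"
        if command -v exiv2 >/dev/null 2>&1; then
          exiv2 -iX insert "$jpg" || echo "exiv2 insert failed for $jpg" >&2
        fi
        break
      fi
    done
  done
}

sync_sidecars "$@"
```

Because it copies instead of moving, making a few different JPEG versions of the same RAW shouldn’t require touching the XMP again.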
Issues with tags and titles aside, I am really liking RawTherapee so far as my Lightroom RAW processing replacement. I wanted to document my process for getting to a black and white photo that I like both as a tutorial of sorts, but also to document for myself how it works with RawTherapee.
Here I’ve activated one of my favorite features that RawTherapee has and Lightroom does not: two windows showing just a small region up close. Too often I’m stuck zooming in and out of an image to check various parts of it as I make changes. It’s not as crucial with this image, but I just wanted to test out the feature.
So let’s start off on the Exposure Panel and look at the Exposure Compensation slider. I make a slight adjustment to the midtones. I’m going to skip adjusting the white balance, because I like how the camera did it. It seems pretty similar (to my eyes) to the real thing. Now I’m going to take a look at the contrast and saturation.
Overall, I like to get a picture I would like as a color picture before I go black and white. I’ve pushed the basil leaves a BIT too far on the saturation, but I think that’s probably going to work out well for me in black and white. There’s a bit of a highlight issue on the orange pot. I’m going to try the highlight compression slider to see if that can help with it a bit.
It does help a bit and I don’t want to overdo it as it has consequences on the whole image. So now let’s move on to the Black and White tool. I played around a bit and got this:
It may not be an awesome black and white image – I probably wouldn’t have chosen this image to convert. (Optimizing for the pots, as I have, causes the leaves to somewhat blur together.) But it has all the tonal qualities of a black and white image that I like, and it only took me about 25 minutes including learning what the sliders do and writing up this blog post. RawTherapee gives a few ways to convert to black and white. The default was “desaturation”. Ever since I learned how to do this with old Photoshop 7, I never liked that method of black and white photos. It may suit some, but it never quite had the tonalities I liked. The method that worked best for me in RawTherapee was the Channel Mixer. This is the way I used to do it in Photoshop before Lightroom had their own weird way of doing it. I actually prefer doing it this way as it makes more sense to me. Given the composition of pots and the greens in the leaves, I found the best filter for me in this specific circumstance was a Blue-Green color filter. I’m not sure what the Before Curve is supposed to do, but I imagine it’s a way of getting around what I did in the beginning with the saturation and so on. However, I always like to use an S-curve to get the tonalities I want. The degree to which I mess with the sliders depends on the image, of course.
And here’s the resulting JPEG.
Overall, this is a process I prefer to the Lightroom way of doing things. Really, with a fixed metadata process, I think it would be the perfect program for the way I think about photos. (At least so far)
A few days ago I created a page to keep track of various computer projects I’m working on. I figure this’ll help me keep track of what’s going on and what I’ve written about it and it’ll also maybe serve as a one-stop shop for visitors to the blog who want to see how I implement various projects.
This is the first post documenting my research so far on my Home Server Project. Here’s how I describe it at the moment:
A project to use some sandboxing – Project Atomic, VMs, Docker Containers (all, some, or none) – to run home DNS, MySQL, game servers, and a file server. Currently many of these run on one Pogoplug computer (which can cause issues during updates). Also, eventually, a proxy for when the little one starts using the net.
Its current status is “research”. For someone as technical as me, running a home server ends up being more useful and easier in the long run. Currently I run DNS because nearly every diagnostic step for network problems involves resetting routers. This was especially true when I was on Verizon FiOS and using their router. I got tired of losing track of computers every time that happened. MySQL and file server functions run on BabyLuigi in order to have one updated library for Kodi (formerly XBMC). The game server is for Team Fortress 2. Finally, a proxy would be nice for when Scarlett starts using the net.
So I started looking around earlier this year, and one of the things that seemed like it was meant to be perfect for this is the Docker Container. It basically creates an application-level VM (as a simplification) that would allow each of the functions I’m looking at to be individualized. They don’t affect each other, don’t use too many resources, and can be moved from computer to computer. Of course, the best way to run this on the fewest resources is Project Atomic. Project Atomic is meant to be a minimal install (currently CentOS, Red Hat, or Fedora) upon which the Docker Containers can run.
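As a rough illustration of the per-service container idea (the image names, ports, and volume paths here are placeholders I made up, not a tested configuration), it could look something like this:

```shell
#!/bin/sh
# Hypothetical sketch: one container per home-server role.
# Image names (example/...) and paths are placeholders, not real setups.
if command -v docker >/dev/null 2>&1; then
  # DNS for the home network (some BIND or dnsmasq image)
  docker run -d --name home-dns --restart=unless-stopped \
    -p 53:53/udp -p 53:53/tcp example/dns-image

  # MySQL backing the shared Kodi library; data kept on the host
  docker run -d --name home-mysql --restart=unless-stopped \
    -v /srv/mysql:/var/lib/mysql -p 3306:3306 mysql

  # Team Fortress 2 dedicated server
  docker run -d --name home-tf2 --restart=unless-stopped \
    -p 27015:27015/udp example/tf2-server
fi
```

The appeal is that each container can be updated or restarted on its own, which would address the problem of everything living on one Pogoplug and breaking together during updates.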
I spent a few days looking at Project Atomic after having spent a few days looking at Docker Containers. It would be the most correct way to implement what I want. But it’s also unnecessarily complicated. While it would take me just a couple hours to set everything up on a barebones computer or in a VM, it would take me days to get everything configured with Project Atomic and Containers. And the way these things have to interact, including networking, is just way too complex for me. It’d make sense to set up at work, but at home….no.
So, the question is whether I’d rather run it in a VM or on a barebones computer. Or, to be more accurate, should I run it barebones, in a VM, or in multiple VMs. That’s the question I’ll be tackling next as part of the research.
After having filed some bugs and spent a bit of time trying to figure out what’s going on, it appears that the issue with the metadata not carrying over from my DNG and CR2 files to the JPEG is not in any way RawTherapee’s fault. The problem is where Exiv2, the library used by Digikam, is expecting to look for this data. Of course, what I don’t understand about this is that Exiv2 is what wrote the data to begin with. Why write it to a location they were not going to be able to read from? Or maybe they only expect it to be there in DNG and CR2 files, but not JPEGs?
I don’t know. Here are the bugs I’ve filed in case someone else can make sense of it:
I’m going to have to post some DNGs to those bug reports so they can make sure not to be thrown off by the fact that it is not best practice to write to a CR2 file.
But what struck me most about this issue is that the strength of these standards, namely that they can be extended as camera manufacturers expand what they capture and create new functionality to be captured, ends up also being a weakness. The data can be stored in various places that different tools do not expect to look in. This is the norm when it comes to standards; it’s why it is recommended that you use WiFi equipment from a single manufacturer, for example.
In the case of my photos, this has resulted in me being somewhat upset. These feelings don’t really even have much to do with the present Digikam/Exiv2 issue. It’s upsetting that these things are not standard enough to guarantee that those who inherit my digital photos will be able to get all the tags, titles, captions, etc. that I add. After all, the best thing about digital photos when it comes to documenting family memories is that, unlike regular photos, there doesn’t have to be a consultation with the oldest living family member to find out who’s in the photo. A well-tagged photo in combination with facial recognition and a large enough database of photos makes it so that at the very least you know who is in the photo, and at best you can correlate the names to the faces in the photo. If I take my own family as a representation, then there’ll maybe be 1-2 people per generation who will care about this stuff. But, having been frustrated in my attempts to work my way up the family tree, I’d like for it to neither be my fault nor the fault of technology that my descendants would be lacking in information. At least, if they maintain my folder structure, the dates and events will be roughly identifiable.
In the meanwhile, should I need to replace my photo hard drive soon (the point at which I’d like to make the switch in order to have btrfs instead of NTFS), there are at least a couple workarounds. First of all, I plan to provide a DNG to the Exiv2 guys (I’d given them a CR2 before) and help them in any way I can so that all users can have a better product. Second, in a blog post I’ve yet to write, I will talk about how I most likely won’t be making JPEGs of all photos anyway. This limits the ramifications of this issue. So I can always just sync up the metadata manually for onesy-twosy files. Third, if I need to do a huge batch of JPEGs, I can always create a script to use exiftool and Exiv2 to make sure the metadata is in the right place for Digikam to read it from the files.
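That third option might look something like the following sketch. It assumes each JPEG sits next to its DNG and that copying -xmp:all covers the title and tag fields Digikam reads – both assumptions I’d verify on a test file before running it against a whole library:

```shell
#!/bin/sh
# Hedged sketch of the batch idea: use exiftool to copy XMP metadata
# from each DNG onto its developed JPEG. Field selection (-xmp:all)
# is an assumption about where Digikam stores titles and tags.
batch_copy_xmp() {
  dir="${1:-.}"
  for jpg in "$dir"/*.jpg; do
    [ -e "$jpg" ] || continue
    dng="${jpg%.jpg}.dng"
    [ -f "$dng" ] || continue
    if command -v exiftool >/dev/null 2>&1; then
      # -tagsFromFile copies the source file's metadata onto the target
      exiftool -overwrite_original -tagsFromFile "$dng" -xmp:all "$jpg" \
        || echo "exiftool failed for $jpg" >&2
    fi
  done
}

batch_copy_xmp "$@"
```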
Eventually (probably by 2016 when Exiv2 0.26 comes out), this’ll be a moot point in this particular case, but, with the way standards are written, it’ll continue to be an issue in one way or another going forward. And they’re just starting to figure out the best way to do metadata for videos….
Scarlett asked to talk in my field recorder, but she called it a lightsaber. “Daddy, can I talk in your lightsaber?”
Here’s what came of that:
I asked my wife if she could show me how to cut a mango. This is what Scarlett said:
You use scissors! And you cut! Open. Close. Open. Close. Put your fingers in the holes. Open. Close.
And then I tried to recreate the situation while recording it. Here’s what she said with my prompting:
Two redeeming bits of news for RawTherapee (even though one of them means there’s still something to be solved before I can switch completely to this new bit of software).
Last Fall I started considering moving away from Lightroom after having used it for nearly a decade. Back then I was making use of the student price to actually be able to afford it. Competition from Apple Aperture and other programs caused it to eventually drop to $150 per version. But Adobe seemed to be moving more and more towards a subscription-only model. Lightroom is still available standalone, but it appears the rest of the CS suite (including, for example, Photoshop) is on the subscription treadmill now. While there are surely some benefits to being able to rent Photoshop and Adobe’s awesome video editing software when you need to do a project rather than paying a thousand-plus fee, one way I’ve afforded Lightroom is by not upgrading every year. So while it’s cheaper to pay monthly than to buy outright (at the prices they had when they went subscription), I rarely found the upgrades worth it and so was able to save some money. I started considering alternatives. But I’d had Lightroom 5 on my wishlist and someone bought it for me for Christmas. So I figured I’d be a Lightroom user for a few more years.
The problem is that Lightroom 5 is horrible. It provides no noticeable improvements over 3, yet it’s far slower and more memory-intensive. Things just take forever, and that delay costs me free time. A coworker says his experience is the same with Lightroom post version 3. But can I do what I need to do with Linux?
It certainly has some benefits. For one, I get to use my dual monitor setup to perhaps light-table the images at a larger size. The btrfs file system has matured a lot and it has natural protections against bit rot (even more so if you have a RAID1 or greater setup). Finally, the software is free in all senses of the word. Even if performance is no better than Lightroom, I’m saving myself about $150 every 1-2 years (assuming they don’t end up forcing the subscription model as they’ve done with the Creative Suite).
Yesterday I posted my typical Adobe Lightroom RAW workflow. If I can keep it mostly the same, that helps in not wasting my time on a learning curve. But perhaps there’s room for a new workflow. So I googled and I posted on forums and experimented. First off, the reason I always create JPEGs of my DNGs is so that people can see my files even if they don’t use Lightroom or Photoshop. Basically, JPEGs are everywhere so I am sure that my descendants will have access to the photos. But I discovered something about Dolphin, the KDE file manager – it can already see DNGs!
So they’d be able to see the photos. And also I’m sure with a very quick Google search they’d find a multitude of open source projects that could produce a JPEG for printing. So, whether or not I leave Adobe Lightroom, I’m definitely no longer creating JPEGs of anything but the best photos – the ones I’d either post to flickr or my blog. Also, now that Lightroom has caught up with Digikam and can see videos, I’ve had issues with trying to do exports and other operations on video files in the same folder as pictures. I may end up making a sub-folder for videos.
So, let me first present what happened yesterday when I played around with some DNGs – the same ones I used in the post on my Lightroom workflow. They imported into Digikam without any issues.
The image I see is the embedded JPEG – the camera’s estimate of what it might look like after processing. I tried some editing of the DNGs, but it was an exercise in frustration. Without reading the documentation, it looks as though the image is already converted from RAW and you’re making changes to the embedded JPEG. I will need to take a look at the documentation if I want to figure out what I need to do there to edit the RAW file before doing stuff in Digikam.
The best of breed for open source RAW image manipulation on Linux at this point seems to be a fight between RawTherapee and DarkTable. Ideally, from the point of view of using Digikam to catalog my photos (with its many, many useful features), RawTherapee would be the best program. I would catalog (or, to use the right term, Digital Asset Management) in Digikam and use it for my phone and other JPEG-only cameras. I would send RAWs to RawTherapee for processing and then manage them within Digikam. On the plus side, I can already send the image to RawTherapee from within Digikam:
Out of the gate, this was not the situation. This is what my DNG looked like when opened in RawTherapee:
So….that’s not exactly desirable. Darktable gives something more like what I expected, if a bit darker than the embedded JPEG (and I think Adobe Lightroom’s default interpretation):
That doesn’t mean RawTherapee is automatically out of the running. Even though DarkTable would probably be easier to pick up (as its interface apes Lightroom), it has a library of its own and I’d rather not duplicate things unless absolutely necessary.
Finally, unfortunately, the embedded JPEG did not update from within RawTherapee, meaning that I had to save out a PNG (or JPEG) of the changes I’d made to the DNG. In fact, I’m not sure the changes were written to the file, meaning that if I opened it up somewhere else after RawTherapee ceased to exist, it would no longer have my changes.
OK, OK, but that was just one day of futzing around. Let’s get a little more scientific. First of all, while DNG is certainly more portable than CR2 or any other proprietary file format, I’m happy to report that Dolphin can read it too. So if, for whatever reason, DNG conversion is a bust on Linux, I’m not too sunk.
Not bad, eh? OK, let’s try a slightly more robust look at a workflow for Day 2. I’m going to attempt to use only Digikam and RawTherapee. I took some photos outside today, as I figure that’ll provide a better idea of color balancing issues. I’m doing a comparison between Lightroom and RawTherapee and also evaluating the workflow. OK, let’s start off by looking at the files in Digikam.
It looks exactly as it does in Lightroom (I’ll show the comparison later) which makes sense as it’s working from the same embedded JPEG for its initial preview. Now, what I would do in a real workflow is to tag the photos. I’ll do that now.
OK. So now I expect that if I send them to RawTherapee and back that I won’t lose the tags. Let’s see what happens. First, let’s open the DNG in RawTherapee.
There’s that strange pink again. Well, let’s see what we can do. So first of all, I try a spot white balance and we’re off to a good start.
Much better. So it appears that, by default, the white balance is off. Not a biggie, I’m sure I could save that as a default to apply to all photos. But it’s a bit hazy – a bit Holga-ish for my tastes. Let’s see what I can do about that.
I click a bit, unsure of how to get this photo not to look so faded. Ok, so I mess around with the sliders. Surely this is a program meant for those who want the utmost control over what’s going on to their raw files. Here’s how it looked in Lightroom:
Here’s what I got it to in RawTherapee:
I ended up with a way more contrast-y image. It’s not bad. It’s not hazy anymore. Not quite real life, but neither was Lightroom. I would have punched that one up a bit. Ok, so, workflow-wise here’s what I want: 1) I want the image’s embedded JPEG to update to reflect this. 2) I want the metadata to be intact not only on the DNG, but on any JPEGs I develop. Let’s see what happens:
Annoyingly it has lost the title and the tag. It kept the caption. I wonder if there’s a setting I missed in RawTherapee. The PNG doesn’t even get the caption copied over. I check the settings and it SHOULD be copying over:
Next I’ll try the CR2 in case RawTherapee is able to read that to be more like the embedded JPEG out of the box. Ah, now look at this!
So if I want the easiest time possible with RawTherapee, I’d be best off not converting to DNG. At least not before processing the file! And perhaps it is because I converted to DNG 7.1 for the other photos?? This time I’m able to work just as I might in Lightroom, just adjust the vibrance a bit and it’s ready to go!
Now to export to JPEG:
Unfortunately, starting from a CR2 did not prevent the tags from being lost.
1. The most important aspect, tags and other metadata being preserved seems to be broken or missing. Or implemented in a way that doesn’t work with Digikam.
2. The embedded JPEG is not being saved to the DNG file or the CR2 file. Not the end of the world, but keeps Digikam from being as useful as it could be.
3. RawTherapee is complex, but can produce harmonious images when it has a good starting point (i.e. a camera-native CR2, not DNG).
4. I didn’t really test DarkTable – may do so soon.
5. For now, looks like I’ll have to investigate before I can make the jump away from Adobe Lightroom.
I wanted to record this as it may make future discussions on forums, mailing lists, and even on this blog make more sense.
For those who are sight-impaired or on low-bandwidth connections, essentially:
note: This is a blog post about fine art nude photography. While there is no pornography or erotic image on this page, you may not want to load it up at work. Also, to see all my work with this model on flickr, you’ll have to sign in so they can verify that you’re old enough to see the photographs.
To keep my front page from being NSFW, I’m going to use a MORE tag to make you jump to see the photos themselves. See you after the jump.
Yesterday I put in for the GOG Galaxy Beta and today I got my invite. I couldn’t wait to get home to see it in action. I did not bury the lede; it was exactly as I state in the title, a good first start. The settings are so minimal at this point that it doesn’t have any tabs:
As you can see, many of the most exciting features are marked as coming soon. Still, it’s exactly as I hoped they’d do it. I meant to remark in my last blog post that I hoped they’d make the game pages just like their webpages. I find their webpages very, very useful. It’s less cluttered than Steam and brings the reviews to the fore. Let’s take a quick screenshot tour of the client. (I was going to do a screen capture video, but the client is so simple at this point that a few screenshots will do it justice)
I don’t like the default game list because it’s an incomplete skeuomorphism compared to the page on the web:
You don’t get the pretty box art or the nice shelves. You just get a bulky view that would not be fun to scroll through if you had lots of games. But I do like the other view currently provided:
A purchased game’s page is similar to the way it looks in Steam, but more barren without Achievements:
One thing Steam doesn’t do that would be nice for GOG to adopt is to have a description of the game on this page. Just because I bought it doesn’t mean I remember what the game is; especially if I bought it during a huge sale.
I was wondering how they’d deal with old purchases. Under the “More” button you can import a folder if you already downloaded the game. I didn’t try it with The Witcher because I can’t risk it screwing up my save files. I’m too far in to start over. But it seemed to work with Fallout and Dungeon Keeper Gold. With Sam and Max it appears it re-downloaded everything because of a change in folder naming convention. For the other two it did download a small file, so perhaps there was a patch I needed.
As you can see on the left, the games you can play are in a column that is visible no matter what page you’re on. Downloads and the queue can be viewed from that little arrow in the bottom left corner. At the moment I’m indifferent to both of these differences from Steam. They’re different, but not necessarily better or worse. For example, it’s nice to be able to launch any game from your library without having to go to the library page. At the same time, it’s not a big deal to just click library. So that’s horizontal space that … well … let’s face it, with Steam on a wide monitor most of that space is just empty anyway. No user-generated categories yet. They do have a comprehensive search on the library page, but I do like to make categories for games I’m currently playing or games I play often.
Overall, it’s a pretty stable beta and they are off to an excellent start. I’m already happier that it can auto-update my games. I look forward to their continued improvements.
If I mentioned it on this site, I wasn’t able to find it in a search because of the generic word, but I was very annoyed and pretty upset about EA’s Origin store and platform. Part of what I enjoy so much about playing PC games is that the only limitations on what you can run are based on OS and the power of your hardware. In the console world there are games exclusive to Playstation or Xbox and for the non-exclusive games I have to figure out (if I’m planning to game socially) which platform my friends and family are going to buy the game on. For PC games that’s not an issue. All games run on Windows and a greater and greater number run on Linux and OSX. Usually, no matter the OS, everyone can play together online.
The part of PC gaming that made me upset about EA’s Origin was the limiting of all (or most of the big ones) their games to that platform. Before that you could choose Amazon, Steam, or any other way to get the games digitally. It’s OK for Origin to exist, but simply because I HAD to use it for EA rather than it being a choice (like maybe if they had some better sales than Steam), I’ve essentially boycotted it. It’s the reason I still haven’t played Mass Effect 3. Its mandated use also means someone else needs to have my credit card info, another folder with games, another place to check for deals, etc. It just makes things more complex than they need to be. Also, then Ubisoft and others started making noises of further fragmentation. Ugh! Don’t increase the friction for me to spend money on your games!
But I think that GOG’s new Steam-alike (I’m sure others have independently coined that – feel free to spread it around) is a good thing. Why?
Because, unlike EA’s Origin, GOG Galaxy is not publisher specific. Additionally, GOG is providing a competitive difference over Steam in a few different ways. It is the marketplace working as intended! (Rather than being used to keep incumbents in power) Also, it was badly needed as it was keeping me from spending more money there.
So, first, what is GOG providing to differentiate itself from Steam? Perhaps because it began life as Good Old Games, using DOSBox and other means to get old games to run on modern computers (and getting licenses from the creators to do so), they have a strong commitment to providing DRM-free games. This is something I value, and something more and more people are valuing as word of DRM’s annoyances spreads beyond those of us in the techie/EFF/free software world. I’ve been burned a few times before by DRM, so given the choice between the same game with DRM and without, I’ll choose without, thank you. In fact, recently I’ve chosen to buy a game on GOG rather than Steam for just this reason. Additionally, many publishers implement DRM by requiring a constant Internet connection. While I’m blessed to live in a place where I can actually choose between FiOS and Comcast (sure, both are evil, but I have a choice), Ars recently did a story on how even in the suburbs of some pretty major cities it can be hard to get broadband. Also, sometimes the Internet goes out. That’s when you most need to be able to read or play games that don’t need an Internet connection. Finally, nearly every GOG game comes with lots of fun freebies: manuals, desktop backgrounds, soundtracks, user icons, and more. Some of this is more valuable than others, but it’s still a nice perk.
As you can see, I think it’s pretty important. Also, it helps to send a message to the publishers. After a few years of Humble Bundle, I think publishers have seen (and quite a few of the indies have been vocal in publicizing) that their illicit download rates (misnamed piracy) have been somewhere between unchanged and lower without DRM. So publishers don’t have to pay to license DRM and the consumer gets a better experience. In fact, I think if GOG gets their act together, perhaps instead of Steam keys (or in lieu of Steam keys), Humble will be providing GOG keys to their games. I think it’d work quite well. I always find it weird to be buying DRM-free games on Humble, but because I like everything managed from the same place, I end up getting what is likely a DRM-filled version from Steam when I redeem my Steam keys.
Now onto why I think GOG needs or, as you’ll see from their PR material, could benefit from Galaxy, their Steam-alike. When I first got involved with GOG (I think they were having a giveaway or super cheap version of Fallout or Dungeon Keeper), here’s the only interface I had for the games:
There’s something kind of pretty about it. For one thing, it reminds me of box art which, as in music, is becoming a lost art. For another, it reminds me of shelves as a way to showcase one’s interests. But when it comes to downloading the game (and extra goodies) – it’s somewhat clunky. So GOG introduced their downloader:
It was a step up in a few ways. It allowed me to easily download the game and all its goodies at once. It resumed interrupted downloads. It let me know there were updates, but didn’t make actual updating any easier. I was never sure I’d actually updated whatever I was supposed to update. This, more than anything, made me hesitate a bit on whether to buy a game on GOG or Steam – especially when it was newer and I thought it might have lots of patching to do. So it seemed that Galaxy was the next logical step.
However, GOG gets some advantages based on both their philosophy (best demonstrated by their anti-DRM stance) and on being late to the party. They’re able to design a platform that does Steam better than Steam does Steam.
This is best demonstrated by 1) their insistence that the games will work without Internet and EVEN BETTER 2) the fact that every aspect of GOG Galaxy, including whether or not to use it at all, is optional. So if you like auto-updates you can turn them on. If not, don’t worry. If you like chatting, turn it on….and so forth.
Again, lots of optional features, many of them great! For example, I don’t use achievements to determine how I stack up to others because it can too easily be gamed. But I *do* like achievements to help encourage me to try new features or to reward me for exceptional gameplay. (I’ve blogged about that before, but don’t have time to look up now) Crossplay gets to what I was talking about in the beginning of this blog post – PC gaming should continue to be unified! It’s one of its greatest strengths against console gaming. Time tracking is slightly less important because of my use of Raptr, but it’s a nice feature.
These are great things to promise. I hope that GOG can continue to stand on principles even as it adds more Triple A games. (Humble Bundle continues to be a great source of entertainment for me, but it has long since abandoned its DRM-free stance and Linux/OSX stance. However, I do admire that they do call out which games ARE DRM-free) It is the difference that makes GOG worth a look. Otherwise, they’re just a Steam wannabe. But if they can stick to their principles, they can help change the gaming world like Amazon did with MP3s. (Yeah, people like to put that at St Steve Jobs’ feet, but he only did that in reaction to losing his grip to Amazon)
Finally, I forgot to mention it above, but one thing where Steam excels is in facilitating Linux installs. Games just work whether you have Ubuntu (and derivatives) or Fedora (and derivatives). Hopefully GOG Galaxy in its final form can provide this ease of use, because a good chunk of their newer games also work on Linux.
I’ll have another blog post when I get into the Beta program.