G+MM is officially recommending the Friends+Me Google+ Exporter.
Friends+Me Google+ Exporter exists, works, is easy to use, is under active development, and the developer has been phenomenally responsive. All of this in direct contrast to Google's own Data Takeout.
This is especially true for migrating G+ content to a new site: WordPress and Blogger are directly supported, and a usable JSON extract is available for other platforms.
The free product will export up to 800 posts per channel; the $20 license removes the limit. The application runs locally, using your Chrome / Chromium browser, and according to the developer no account data are sent off-site other than to authenticate to Google.
F+MGE will archive text content, photographs, and Communities (including posts by any member); the last is not possible with Google Data Takeout. More features are under development, and incremental archive updates will be available to current users as the product improves.
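For people targeting platforms other than WordPress or Blogger, here is a minimal sketch of splitting a JSON extract into per-post text files. The field names ("posts", "text", "createdAt") are hypothetical; inspect your own export to find the real structure before adapting this.

```python
# Sketch: split a JSON export into one text file per post.
# Field names ("posts", "text", "createdAt") are assumptions, not the
# exporter's documented format -- check your own extract first.
import json
from pathlib import Path

def split_export(export_path: str, out_dir: str) -> int:
    """Write each post's text to its own file; return the post count."""
    data = json.loads(Path(export_path).read_text(encoding="utf-8"))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for i, post in enumerate(data.get("posts", [])):
        body = post.get("text", "")
        stamp = post.get("createdAt", "unknown-date")
        (out / f"{i:06d}-{stamp}.txt").write_text(body, encoding="utf-8")
        count += 1
    return count
```

From there the plain-text files can be fed to whatever static-site or import tooling the target platform offers.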
Friends+Me has served Google+ users for years, and recommendations from people we know and trust played a strong role in our endorsement.
To be clear: we've had no consideration from Friends+Me or developer +Alois Bělaška other than a phenomenal and badly-needed tool, excellent support, and copious information freely provided. We thank him wholeheartedly for all this.
Download: https://gplus-exporter.friendsplus.me/
About: https://medium.com/google-plus-exporter/how-to-download-images-7dc321b6f179
- Seems that G+ developers don't want us to use direct-access directory indexes for all the data types in any cohesive, non-labor-intensive way. Could it be that all user data was immediately decompressed and stored in these incredibly inefficient, unrelated data-banking troves? I doubt it. Google is still testing us, for our otherwise un-surveilled developers' abilities. G+ was a marketing-surveillance goldmine for Google, and they forgot to leave a conspicuous advertising money trail to justify their venal intrusion into the "Affective Social Media Marketplace." It was designed dirty because it is dirty. (Another lesson in corporate evil: "Work harder, not smarter.")
- Edward Morbius (Owner) +Phillip Johnson ?? "direct access directory indices"?
- I bought my licence today.
- +Edward Morbius Sorry. Yes, our use of their product is broken up for storage. I mean that any time anyone within the corporation has to access user directories, you can be sure there have always existed simple date/time, author, and subject indexes. (Just like my blog.) Periodically the marketing-surveillance heuristics need to be checked for precision of utility. This can't be done on a piecemeal infinite scroll. Q: Am I right? You might be someone who knows. Thank you for asking.
- Edward Morbius (Owner) +Phillip Johnson Somewhat clearer, though still not 100%.
- +Edward Morbius Dearest Ed. I am obsessed with the linguistics of code and code technologies. Just like math, the language is terse and often very formal.
There is a movement within some management training schools that lets outsiders critique otherwise overly obscured conventions. The problem for many large corporations today is not just that no one really knows what anyone else is saying or doing anymore; large institutions usually collapse under the weight of their own conservatism, paranoia, and obscurity. Fortunately, code and its language is not so much a belligerent science as an art form practiced within an elite guild of code craft. Immersion is usually the only bridge to literacy in such a refined, socially constructed language.
I doubt that what I am describing is really that hard to follow. I just think Google has been found shamelessly naked, and we don't want to have to be the first to admit it. Thanks again.
- Edward Morbius (Owner) +Phillip Johnson As regards the "indexed access"....
I've been thinking for the past few days of what might happen if we were to go back to the rudiments of, say, email or Usenet, and start building up message formats and transport protocols from there, adding and removing as necessary. What you'd need to add or take away. What new capabilities we have now.
An obvious win would be the integrated metadata -- sender, recipients (defined ... somehow), dates, and potentially useful subject lines, though even in email that's hit-or-miss now.
I'm actually considering creating an mbox or maildir archive as one format for my G+ data takeout. Payloads in bodies, first sentence or so as a subject line, and then other bits. Run "notmuch" over that and have a highly-indexed, searchable trove. For my own use, not externally accessible. Text-only. But interesting.
And mutt (a console-mode email program) would be the client interface. Or any other email client. I'd have pretty much instant search by any of mutt's standard fields: from, to, subject, date, full text. Could add a few additional custom fields -- attached URL(s), whether or not there's an image. Maybe even thread the discussion underneath using "In-Reply-To" and "References" headers.
With a few tweaks, that could be a local Usenet spool, with similar access.
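The scheme above can be sketched with the standard library's mailbox module. This is a sketch, not a finished converter: the post dicts and their keys ("author", "date", "body", "url") are my own assumptions, not any particular export format.

```python
# Sketch: pack G+ posts into an mbox so mutt or notmuch can index them.
# Post dicts and their keys ("author", "date", "body", "url") are assumed
# placeholders, not any real export schema.
import mailbox
from email.message import EmailMessage

def posts_to_mbox(posts, mbox_path):
    """Write one email message per post into an mbox file."""
    box = mailbox.mbox(mbox_path)
    try:
        for post in posts:
            body = post.get("body", "")
            msg = EmailMessage()
            msg["From"] = post.get("author", "unknown@gplus.invalid")
            msg["Date"] = post.get("date", "")
            # First sentence (capped at 70 chars) doubles as the subject.
            msg["Subject"] = body.split(".")[0][:70] or "(no text)"
            if post.get("url"):
                msg["X-GPlus-URL"] = post["url"]  # custom searchable field
            msg.set_content(body)
            box.add(msg)
    finally:
        box.close()
```

Running notmuch (or just mutt) over the resulting mbox then gives exactly the from/date/subject/full-text search described above; comments could be threaded by adding "In-Reply-To" headers keyed on a post identifier.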
It's the sort of thing I'd long wanted out of G+.
And you're right, that's not the sort of capability we're getting from the present generation of distributed Web clients. For various reasons. Though I'm increasingly asking myself "why the goddamned hell not"?
Over the past few days, for various reasons, I've been speccing out just what the entire size of the Google+ text corpus would be. Stripped of its HTML, CSS, and JS packaging, and excluding images.
For Communities, it appears that there are on the order of 300 million messages, most quite short (20-40 words), call it 250 bytes of content per message, on average. The posting rate for the nearly 1 million active communities appears to be 1/wk, and over six years and some change, we get about 320 million posts. That is about 80 GB of text.
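Spelled out, the back-of-envelope arithmetic is just a few multiplications; the inputs are the estimates above, not measurements.

```python
# Back-of-envelope sizing for the Communities text corpus, using the
# estimates from the comment above (guesses, not measurements).
ACTIVE_COMMUNITIES = 1_000_000  # "nearly 1 million active communities"
POSTS_PER_WEEK = 1              # estimated posting rate per community
WEEKS = 52 * 6 + 8              # "six years and some change"
AVG_POST_BYTES = 250            # 20-40 words of text, on average

total_posts = ACTIVE_COMMUNITIES * POSTS_PER_WEEK * WEEKS
corpus_gb = total_posts * AVG_POST_BYTES / 1e9
print(total_posts, corpus_gb)  # 320 million posts, 80 GB
```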
Larger than my typical mail spool (slightly), but not actually a horrendous amount of data.
Estimating the total G+ size is a bit sketchier, but it seems that non-Community posts may be 2.5x or so larger, which nets us about 1 TB total.
Mind, delivered over HTML this bloats tremendously, well into the petabyte range. There is 800 kB of HTML/JS/CSS packaging in a basic G+ page to start, and then you start adding images (on 30% of all posts), at 4-24 MB each. That's ... considerable.
But if the platform had been designed with the thought of distributing content permanently to end users for their access, it ... would have been pretty doable. Subscribing to a stream, via Collections, Communities, Circles, or way back in the beginning of time, Sparks, could have happened.
There's a bit of a create, read, update, delete (CRUD) cycle to deal with: some posts are edited over time (most are not). And there's the question of keeping content nobody ever reads. But really, distributing content on a wide, if not global, scale is within the realm of reason.
And yet that's not what we have.
Why not?
- +Edward Morbius Dear God, Morbius. You write the most beautiful and thoughtful comments.
When double-clicking into the element features of the user-end G+, it might have been possible to set up a personal subordinate indexing routine using some sort of meta tags. When I first opened a Google Analytics account in order to trace the dissemination of my very unique though slightly obscure magazine blog, my math graphics were a huge hit. It was suggested I use fewer meta tags and label my graphics. I'm somewhat sure my work was an issue for both the Google decompression algorithms and the recognition-software identification for storage.
Personal URLs are one of the most useful features for item retrieval, but how many of us really took ourselves that presciently in hand? Most of us thought we would be able to use our private URLs to collect and retrieve our hoards of data and communications. I was warned not to put my work out on Wikipedia because of the fragility of the social contract protecting the love I have put into my works on collectively coalesced cultural artifacts.
I have often speculated and written about the immense waste heat produced by the storage of all those pictures of the kids up at the lake, all expected to be retrievable at the click of a mouse. Consumer tech is the new magic box where people buy the product but don't live in the post-parochial elite communities that generate and hoard data. As a matter of fact, there are progressively fewer and fewer of us who are in the know as to the whereabouts and goings-on behind the scenes of computer tech. Money was not originally the big driver of the sciences. The large tech corps are now like old-school intelligence communities that are slaves to the "Need to Know" tyranny of (rational?) cowardice. We are instead at the burgeoning of new shared platforms for mutual skills acquisition.
I personally believe Google cut and ran from this very successful social-media experiment, which was designed to fail in advance. (No ads except AdWords.) A wise decision, if one considers the probability that Facebook will be tried for the distribution of violent propaganda and hate speech for profit by the war-crimes tribunal in The Hague.