Of course the Postgres website lists several community resources which I would encourage you to check out, but given the recent kerfuffle with the freenode IRC community, I thought it might be good to highlight some additional options. Luckily there are a bunch of them out there, and they are all free to join. The following list is not exhaustive by any means, but these are the regular Postgres gatherings that I visit at least occasionally, and that I think you'll be able to get something out of as well.
PostgresTeam.Slack.com - Ok, this one is listed on the community page, but since I use Slack regularly for work, the Postgres Slack team has become my daily driver. The #general channel serves as the primary spot for general Q&A, but there are also topic-specific channels like #pgbackrest and #patroni, not to mention general information channels like #postgres-job-offers and #pgsql-commits. So far we've had over 10,000 people sign up for the Postgres Slack, and the community continues to grow at a steady pace. If you've not used Slack before, one of the nice things about it is that it has excellent clients for the desktop, your favorite web browser, and even mobile; just in case you need to get your Postgres fix on the go. (What? I can't be the only one!)
Stackoverflow - Well, technically https://dba.stackexchange.com, but in any case, your favorite technical question / answer site has a site dedicated to databases. While I have a lot of concerns about the recent purchase, I don't see anything that comes close to an alternative given what it offers. One of the nice things about Stack Exchange is that it works a little better for long-form questions that require more detail and less back-and-forth troubleshooting than you might do on Slack. It also better embraces the async nature of the web; which is not to say that you'll have to wait long, as the community there is pretty active and answers can often come in minutes. Oh, and if you go now, they are running their annual Developer Survey; if you sign up, be sure to represent the Postgres community :-)
The Postgres blogging community is also pretty active, and you can certainly get a lot of good information through posts, and get some questions answered via comments on those topics. While I don’t have a dedicated process for reading Postgres blogs, I find that I do end up coming back to certain ones time and again. If you don’t have a favorite, just keep the Planet Postgres site handy and you’ll have the chance to check out many of the most active Postgres bloggers and assemble your own list.
Of course there is always Twitter. Best for quick questions and finding out what's new in the world of Postgres, many folks in the community are active and willing to answer (short!) questions on Twitter. While there aren't any official hashtags, I'd recommend following (or tagging) tweets with #postgres or #postgresfriends (or #pghelp if you're optimistic) to get started, and through those you'll be able to uncover active community members that might also be worth following.
Finally I also want to give a nod to IRC. I've been visiting the postgres channel on freenode for nearly 20 years, and while the recent changes were a tad depressing, the community members on IRC have bailed me out plenty of times, and I'm certainly thankful for their help over the years. You can read the official migration announcement about the IRC team moving to the Libera Chat network, and the Libera Chat folks also have some nice docs on accessing IRC, whether through a dedicated IRC client or through a web based client (I'm currently trialing this). The primary community channel on IRC is #postgresql, but there are a number of other options; check out the community irc page for more info. If you don't like Slack for some reason and want to do chat, IRC is still a nice option.
As I mentioned before, this list is certainly not exhaustive. If you don’t like these options, you’re only a google search away from other ones, especially if you are looking for regional or language specific options.
In my experience most of these groups are quite welcoming to new users and happy to answer questions on all sorts of Postgres related topics. Of course, given the distributed nature of the project, and being on the internet in general, you are likely to encounter all different sorts of opinions and folks living in their own world; remember to approach new things with an open mind and find the right fit for you.
select now(); for my actual query, as it's a bit more illustrative.)
My first attempt, without any thought really, was that the right syntax for this was \g 10. When it didn't work, I stepped through the idea, first verifying my select was good, then verifying \g worked as expected, and then trying the \g 10 on its own. When that didn't work, I double-checked the docs and then hit IRC to ask if anyone remembered that syntax working… of course the answer was no.
If you are wondering, I think I was conflating \g (which among other things causes psql to either execute the query on the line or re-run the previous query) with \watch, which also re-runs the previous query, but takes an argument equal to the number of seconds to delay between each run; so \watch 10 will "watch" your query every 10 seconds for eternity.
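For reference, the repeat command really is spelled \watch in psql; a quick sketch (not my original session, and requiring a live server to actually run):

```sql
-- run the query once, then re-run it every 10 seconds until interrupted (Ctrl+C)
SELECT now();
\watch 10
```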
So, that didn’t work, so how to do this easily? Well, the best suggestion from irc was to wrap the query in a quick shell loop, which I admit is a simpel enough way to solve this, but to be honest I wanted an sql level way to handle this. The most obvious solution there was to wrap the query into a DO
script and just loop through 10 times, but even that felt more cumbersome than it should have been, not to mention that would have put all 10 queries in the same transaction context, which probably didn’t matter, but wasn’t something I wanted to think about.
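For the curious, the DO version would have looked something like this (my sketch, not something I actually ran); it also nicely illustrates the transaction wrinkle, since now() is frozen at the start of the transaction and would return the same value for all 10 iterations:

```sql
DO $$
BEGIN
   FOR i IN 1..10 LOOP
      -- now() is frozen for the whole transaction;
      -- clock_timestamp() gives actual wall-clock time per iteration
      RAISE NOTICE 'run %: %', i, clock_timestamp();
   END LOOP;
END
$$;
```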
And that’s when \gexec
popped into my head. Ok, it doesn’t hurt I had just read the docs; but postgres has such a large feature set that even us old timers forget all the things it can do. For the record, the docs describe \gexec
as so:
Sends the current query buffer to the server, then treats each column of each row of the query’s output (if any) as a SQL statement to be executed.
Ok, there’s actually more, so go check out the docs, but the main part here was if I could just generate the query enough times, then I could use \gexec
to run it for me. Of course anytime you’re dealing with loops at the SQL level, generate_series()
should come to mind, and so marrying the two, you get:
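Something along these lines (a reconstruction, with select now() swapped in for my real query):

```sql
-- build one result row per desired run;
-- \gexec then executes each result cell as its own SQL statement
SELECT 'select now()'
FROM generate_series(1, 10) \gexec
```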
By using generate_series() I can generate exactly how many copies of the statement I want (and dynamically substitute in info as needed), and each query will run as its own statement, all without leaving psql. It's the little things, eh?
Note: If you like this post and think \gexec is going to be a useful addition for your tool box, you may be equally excited to know that just this week Postgres released a round of security fixes which includes a fix for a nasty exploit involving \gexec. Yeah, that sucks, but maybe now you'll get some value out of the thing you have to patch. Take what you can get, it's 2020.

Note redux: That's what I get for not double-checking. The security fix was for \gset, not \gexec. Apologies for any confusion and/or if you accidentally upgraded your database because of my post.
This release incorporates the following changes:
Note this release drops support for PHP 7.1, and will be the last release to support PHP 7.2.
For more information on phpPgAdmin, check out our project page at https://github.com/phppgadmin/phppgadmin/
You can download the release at: https://github.com/phppgadmin/phppgadmin/releases/tag/REL_7-13-0
For complete details of changes, please see the HISTORY file and/or commit logs. We hope you find this new release helpful!
Package verification codes:
shasum 6.01
But last year, as OmniTI begat credativ U.S., I made the swap to a full-time remote gig, and no longer need to commute. I still do occasional work trips, or visit meetups, or whatever, but it seems like we don't generally need to have two cars, so because the lease has now ended on our small SUV (we own our other car), we turned it in, and are going to see how this one car thing works out. We did some rough math, and figure we'll save around $300-$400/month between car cost, insurance, tags/title, and other related maintenance. Amber still has a commute, but we think that in the cases where needed, I can give her a ride to work (or she can give me a ride to the airport), and if we do have a conflict beyond that, we can use Uber/Lyft to offset it, likely at a rate of less than $300/month. Or maybe we'll get it all wrong and go get another car, but for now, we're curious to see how it goes.
P.S. For those who have been around a while, yes, we are technically still a two car family. Ole Maggie Miata still sits in the garage, now retired and off the road. If we do need to get a second car, there is a likely chance we’ll fix her up and get her back on the road. Even if we fail, it’s a win!
I am admittedly late to the Chernobyl party. When the series initially aired in May, I ignored it, thinking I'd binge-watch it at some point once all the episodes had aired (it is a 5 part mini-series). I did try to watch it late one night during the summer, but if I am being honest, I passed out on the couch before the opening credits finished in the first episode. Maybe Chernobyl wasn't for me. And then last fall I happened to end up on a trans-Atlantic flight with 5+ hours to kill, so I thought I'd give it another shot. After all, it is much harder to fall asleep on a plane than on my couch.
And then I was hooked. Since that flight, I have recommended the show to many people, especially those folks I know in the WebOps space who are students of Design Thinking, Human Factors, Resilience Engineering, or Safety Science. That's not to say the show is without flaws; the more you read and learn, you see that there are parts of it that are misrepresented or made up; it is dramatic storytelling after all. But if you have read the literature on complex systems failures or seen failure pathologies documented from the medical, aviation, and nuclear engineering industries, you will immediately recognize the behaviors that surface during the recreation of the accident in the show. Not to mention, the Soviet government does a fine job as a stand-in for internet companies today who are kind-of-sort-of forced to admit when things go wrong and yet often try to do so without providing any details or disclosing the true nature of the problems.
One of the things so fascinating about Chernobyl, for me at least, is that I remember this being in the news as a kid, and that it seemed like it wasn't that big of a deal (or at least, not as big of a deal as I thought it should be). Of course, watching this now, it all makes sense, since the magnitude of potential destruction was almost incomprehensible (i.e., wiping out most of eastern Europe) compared to what the Soviets were telling everyone at the time. The lasting effects, which were certainly not a worst-case scenario, were still harmful enough that it made me wonder how much this accident helped lead to the breakup of the U.S.S.R. just a few years later.
Since watching the series, I’ve now gone on to watch a few other shows on Chernobyl, and if you like the show, I’d recommend these as well. One was a 1-hour documentary called ”Chernobyl: As We Watched” which I caught on something called the “Americas Hero Network”. The other was a show titled ”Building Chernobyls MegaTomb”, which highlights a more recent engineering effort to build a new shield over the reactor before the previous one failed. Yes, this is a disaster that will continue to need management for hundreds of years.
This release incorporates the following changes:
Note this new version now requires support for the mbstring module in PHP.
For more information on phpPgAdmin, check out our project page at https://github.com/phppgadmin/phppgadmin/
You can download the release at: https://github.com/phppgadmin/phppgadmin/releases/tag/REL_7-12-1
Special thanks to Jean-Michel Vourgère, who supplied a number of significant patches and updates towards this release. For complete details of changes, please see the HISTORY file and/or commit logs. We hope you find this new release helpful!
Join, or Die
Because the Postgres project has no single owner, the Postgres community has always been a little bit fractured and doesn't always speak with one voice. As users, this means the community can look rather different depending on which vendors you work with, the country you live in, the tooling you use, or the online communities you interact with. Since these different groups aren't always as coordinated as one would hope, initiatives like this can sometimes be harder to push forward, and I think this survey did suffer from that; it only made it out to about 500 people, which is a pretty small subset, and you have to keep this in mind before drawing too broad conclusions from what you see in the data.
Slow and steady growth
39% of respondents have been using Postgres for less than 5 years, with 10% having started within the last 2 years. I've seen surveys from communities that suddenly catch fire, where 50% of users have been around less than a year and 90% less than two (rhymes with shmocker?), and it becomes really hard for those communities to manage that growth. So this seems like a positive, and helps confirm that Postgres is growing at a solid pace, but not in a way that is likely to be damaging for the community.
You do what now?
Technical titles are hard, but with more than half of the survey respondents reporting some kind of developer-oriented job title, and 50% saying they work in software companies, it is again a good reminder that Postgres isn't just for DBAs, and that most people's interactions with the software are coming from non-traditional outlets. I've spent some time coordinating between the Postgres Funds Group and The U.S. PostgreSQL Association this year to ensure a presence at shows like PyCon, RailsConf, and All Things Open, among others, and I hope to see this trend continue into next year.
About those clouds
The answers related to running Postgres on-prem vs the cloud were a bit hard to decipher. We can safely assume about 1/3 of folks are running on fully managed Postgres, but we don't know how many of those people are also manually managing instances as well. (We do both, and I expect others do the same depending on the size/scope of their deployment needs.) I feel like I could make a hand-wavey argument that at least 15% of overall respondents are AWS customers, which seems like a pretty big number, and for some will probably exacerbate the rumblings that, relative to their code contributions, Amazon is not contributing their fair share. Granted, that isn't as surprising as the data on the other cloud providers; Azure/Citus didn't even rank in the poll, which I just have to attribute to a skew based on Timescale's reach, especially since GCP got a hefty 18%, which seems amazing considering how they have managed their Postgres offerings. (I have friends at GCP and I like the platform in general, but Postgres seems like a second-class citizen the way they are currently running things.)
Those quotes
Oy vey. I'm not sure if Timescale was picking quotes just to stir up some controversy (there are certainly more friendly ones in the raw data), but the quotes about NoSQL are a bit off-putting. This is an area where the community needs to continue improving, because we have a reputation for sometimes being "stand-offish". Not in all cases of course, but if you want to find people with strong opinions who are not afraid to speak out, the Postgres community has lots of them. (Perhaps this blog post is a case in point.) Anyway, given that at least 50% of respondents are using at least one NoSQL system in conjunction with Postgres, and based on modern infrastructure patterns that isn't going to change, we need to learn to focus on helping people where they are, rather than where we think they should be, and to be less abrasive about it in general.
All in all, I hope this information will be useful for the community, and I want to thank the Timescale folks for publishing the results (and the raw data), and I hope they will continue to do this and/or work within the community to expand the reach of this survey next year.
As with many software releases, the code changes are plenty, and the release bullets are few, but they are quite important. In this release we have:
PHP 7 is now the default version for development, and the minimum version required for phpPgAdmin going forward. Most users are currently running PHP 7, so we’re happy to support this going forward, and encourage users of PHP 5.x to upgrade for continued support.
We’ve added support for all current versions of PostgreSQL, including the pending PostgreSQL 12 release. Our aim going forward will be to ensure that we are properly supporting all current release of Postgres, with degraded support for EOL versions.
We’ve updated some internal libraries, fixed additional bugs, and merged many patches that had accumulated over the years. We want to thank everyone who provided a patch, whether merged or not, and hope you will consider contributing to phpPgAdmin in the future.
This version also comes with a change to our development and release cycle process. When the project originally started, we developed and released new versions like traditional desktop software: annual-ish releases for new versions with all the new features, with a few periodic bugfix releases in between. While this was ok from a developer's point of view, it meant users had to wait for months (and in unfortunate cases, years) between releases to get new code. As developers, we never felt that pain, because developers would just run code directly from git master. As it turns out, that is a much better experience, and as much of the software world has changed to embrace that idea, our process is going to change as well.
The first part of this is changing how we number our releases. Going forward, our versions numbers will represent:
- the primary PHP version supported (7),
- the most recent version of PostgreSQL supported (12),
- and the particular release number in that series (0).
Our plan is to continue developing on this branch (7_12) and releasing new features and bug fixes as often as needed. At some point about a year from now, after PostgreSQL has branched for Postgres 13/14, we’ll incorporate that into an official release, and bump our release number to 7.13.0. Presumably, in a few years, there will eventually be a release of PHP 8, and we’ll start planning that change at that time. We hope this will make it easier for both users and contributors going forward.
For more information on phpPgAdmin, check out our project page at https://github.com/phppgadmin/phppgadmin/
You can download the release at: https://github.com/phppgadmin/phppgadmin/releases/tag/REL_7-12-0
Once again, I want to thank everyone who has helped contribute to phpPgAdmin over the years. The project has gone through some ups and downs, but despite that is still used by a very large number of users and it enjoys a healthy developer ecosystem. We hope you find this new release helpful!
CALL syntax vs Postgres' traditional SELECT function(); syntax. So it struck me as odd earlier this year when I noticed that, despite the hoopla, a year later there was almost zero in the way of presentations and blog posts on either the new stored procedure functionality or the use of plpgsql in general.
And so I got the idea that maybe I would write such a talk and present it at PGCon; a nod to the past and the many years I've spent working with plpgsql in a variety of roles. The committee liked the idea (disclosure: I am on the PGCon committee, but didn't advocate for myself) and so this talk was born. For a first-time talk I think it turned out well, though it could definitely use some polish; but I'm happy that it did help spark some conversation and has actually given me a few items worth following up on, hopefully in future blog posts.
Video should be available in a few weeks, but for now, I’ve gone ahead and uploaded the slides on slideshare.
Now for the backstory…
After much hoopla a few years back about new admin clients and talk of the pgAdmin rewrite, most of the regular contributors had pretty much moved on from the project, hoping to see a clearly better admin tool surface as a replacement. Instead, I saw multiple projects launch, none of which captured the hearts and minds, so to speak, and saw the number of pull requests on an ever more abandoned-looking project continue to pile up, not to mention thousands of downloads.
As for me, while not doing much publicly, privately I was still maintaining two copies of the code: one with support for newer Postgres servers, and one with support for PHP 7; both in rough shape. While my schedule doesn't leave much time for random hacking, about a month ago I saw an upcoming block where I would be conferencing three weeks in a row and suspected I could probably find some time during my travels to do some updates. After a little bit of thought, I decided to do two releases. The first would add support up through Postgres 11, the most recently released version of the server software, and the second would add the aforementioned PHP 7 support. Granted, it's taken longer than I had hoped, probably mostly because that's how software engineering works, but also because I had to literally relearn how we were running this project, but I think I've got most of that worked out now.
I suspect the two releases might annoy some people, given that PHP 5.6 is years old and in many people's minds EOL. But it turns out that a lot of people still run various 5.x versions of PHP, so this is a nod to that user base. If you are one of the people who has been waiting for a PHP 7 release, don't worry. As mentioned, I already have a patch set, so I'm hoping to have that completed in the next couple of weeks. Once that is released, I think it will make for a good base to start adding new features again; there is a bunch of stuff that could be added to phpPgAdmin, it's just a matter of re-igniting the engine, so to speak.
In any case, there is life again in the old project. Long live open source.
For those with a deep curiosity, this means my stack is now Octopress as a static site generator. Those files are uploaded to Surge.sh, which does static site hosting. And now Cloudflare sits in front of the site, managing DNS, providing caching, and the aforementioned SSL. I also keep a copy of the site at Heroku that I occasionally play with. This means my typical workflow is using vim for site updates, and git pushes to get things to the various places they need to go.
One of the reasons Postgres 10 is significant for Pagila is that Pagila contains an example partitioned table, and as many have heard, Postgres 10 contains an initial release of simplified partitioning capabilities. While not particularly useful (yet) compared to the current partitioning capabilities, for the purposes of showing off examples, it was time to update the Pagila schema to show off this new feature. And given I had to do that work, I thought maybe it was time to make a new "official" release.
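For context, the Postgres 10 declarative syntax looks roughly like this; a minimal sketch using a simplified payment table, not necessarily the exact Pagila definition:

```sql
-- the parent table holds no rows itself; it only routes them
CREATE TABLE payment (
    payment_id   int          NOT NULL,
    customer_id  int          NOT NULL,
    amount       numeric(5,2) NOT NULL,
    payment_date timestamptz  NOT NULL
) PARTITION BY RANGE (payment_date);

-- one partition per month; inserts are routed automatically
CREATE TABLE payment_p2017_01 PARTITION OF payment
    FOR VALUES FROM ('2017-01-01') TO ('2017-02-01');
CREATE TABLE payment_p2017_02 PARTITION OF payment
    FOR VALUES FROM ('2017-02-01') TO ('2017-03-01');
```

Compared to the old inheritance approach, there are no triggers or CHECK constraints to write by hand, which is exactly the simplification this release is showing off.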
So, I’ve now created a new Pagila project page on Github. Unfortunately I was unable to get a full copy of the pervious versions from PgFoundry to do a full import, but I did have some copies of older releases lying around, so I used those to recreate the history in git for past branches. This means if you need a version of the schema that works on postgres 9.x, you can checkout one of the older branches. Once Postgres 10 is finally released, I’ll go ahead an tag/branch a version 10 release to complement it. In the mean time, please feel free to play with this, and if you’d like to contribute, you can find me on the Postgres Slack Team or submit a pull request through Github.
I want to give a big thanks to Magnus Hagander and Dave Page, who did talks on earlier versions of 9.4, which were invaluable in helping me put together my own slide deck. Also thanks to Michael Paquier and Heikki Linnakangas, who provided supplemental materials. And no one could do these talks without the work of depesz; I would strongly encourage those looking for more information on 9.4 to check out his blog. Finally, I'd like to thank all of the Postgres developers who have worked on the 9.4 release, without whom we wouldn't have a release.
One of the driving factors in my career has been my desire to work on the most important part of whatever it was I was working on. Early on this led me to web development as a means to share knowledge between team members. After a while, I came to the conclusion that usability and front-end web work were the most important; those were the areas that directly impacted customers and users, more so than the quality of the code or the systems running underneath. Get that wrong and nothing else matters. Eventually I ended up doing a 180 and started focusing on databases. It wasn't that the database itself was important, but the data inside those systems was the thing I determined was the most important thing for a business. You can always replace your front end, your application code, even the servers themselves, but lose the customer data, and you're done.
As time went by, my thoughts changed here as well. I still think data is the one thing that is irreplaceable, but eventually I sought out larger challenges and more responsibility. When Theo and I discussed taking the COO role 2 years ago, I had recognized that we needed someone who could work across the different groups within OmniTI and help people to achieve their goals. It was an area I thought I could have some impact, so I stepped into the role. At the time I didn't worry about the next step, but if you think about this philosophy of taking on the most important work, and take it to its natural consequence, the role of CEO should have been more obvious. Maybe not at OmniTI, but at some point it was bound to happen.
Prior to that, I recently gave a talk at the All Things Open conference entitled "Past, Present, and Pachyderm". The original idea for the talk was to give a highlight of new features coming in Postgres 9.3, however we took a slightly different approach for the ATO2013 crowd, providing some history and discussion around the Postgres project, as well as taking a look at some ideas about future development and direction.
The talk went quite well, and I think really struck a good balance for speaking to a less Postgres focused crowd than the 9.3 talks I have given at Postgres specific conferences.
But here is what I don't get. This week I went to the All Things Open conference and while I was there, I happened to catch the tail end of a SkySQL talk on new MariaDB features. One of the features that he was describing apparently has issues if you work with MyISAM tables, so he asked how many people in the crowd used MyISAM. Not a single person raised their hand. For most database folks, this isn't surprising; for most people doing traditional RDBMS work, you want an MVCC based system of some kind, so people using InnoDB seems like the logical choice. The problem here is, if your community is built around the idea of being free of Oracle, I think there is a problem if your user base is completely built around a technology still owned by Oracle.
So what are these investors buying with their $20 million? If you are trying to secure the future of your database choice, I think this is a swing and a miss. Sure, MariaDB of today is better than MySQL of back then, but from a technology control standpoint, all you've done is buy yourself a ticket back to 2005, when Oracle first purchased Innobase and left MySQL scrambling. Any argument that you can make that the MariaDB community doesn't have to worry about this is basically an argument for why MySQL users might as well stick with Oracle MySQL. I suppose that $20 million might buy another attempt at a new storage engine, but we've been down that road before, and it's not pretty.
PS. If you’ve got $20 million and a desire to help Solaris users get free of Oracle, the OmniOS team would be happy to cash that check.
Next week I'll be heading to North Carolina to speak at the All Things Open conference. While there I'll be stopping by the Triangle PUG for a re-launch / kick-off get-together. While I haven't been to the RDU area for several years, I do know several people in the area, so hopefully it will give me a chance to catch up with some old friends.
After that I’ll be headed to Dublin, Ireland, where I am speaking at PGConf.EU. While I have spoken in Europe & Russia several times, it will be my first Postgres conference there, so hopefully I’ll be able to cross paths with a number of the European community members who haven’t had the opportunity to make it to Canada. From there I head to Oberhaoussen, Germany, to talk at PGConf.DE. I’ve not been to Germany since I was a kid, so while I am bummed that I am missing out on Octoberfest, hopefully it will be a good time.
Wrapping all that up I’ll be heading to London to speak at Velocity EU, giving the talk I previewed at this months DevOpsDC meeting on “Less Alarming Alerts”. I really enjoyed Velocity EU and I’m happy to be heading back this year.
OmniTI has been doing quite a bit more business in Europe over the past year, so during my travels I’m planning to do a couple of client visits and meeting up with various friends and my newest co-worker Vasilis. If you’re planning to be at any of these events I hope you’ll stop and say hi.