There were two lines from the presentation that caught my ear:
Do you fear that some of your tasks will be automated, or hope that some of your tasks will be automated?
During this reskilling, what's the responsibility of the individual? What's the responsibility of the company? What's the responsibility of society?
The first one I understand. If the robots are coming for your job, learn how to make the robots that are coming for your job, then let them have it, and go on to do something else. The things you can automate at work are often not the things you wanted to be doing anyway.
The second one is a different variation on the question of what to do with people who are going to be displaced by AI. In the US, of course, there is a tendency to treat it as the individual's own problem to sort out. But there are multiple responsible parties in the game. If a mass of people get turned out of work (I doubt that will happen to the extent the hype suggests), then the problems will be larger than the sum of the individuals who have them. Better to consider that before it happens.
I finally figured out Venture Cafe, after about two years of attending off-and-on.
Venture Cafe is a weekly mingling event in St. Louis's tech center. They use words like "collisions" and "innovators" to describe what they do, which in many contexts can be really annoying, but here it fits. It's a roughly three-hour event, with several concurrent presentations. Sometimes I'll go to the presentations, but not that often. Mostly I go to talk to people out in the halls. I do that because it's interesting to talk to people—but also because it's really hard for me. I find one-on-one or one-on-few conversations to be incredibly difficult, much harder than getting on stage in front of many. So I sometimes go to Venture Cafe to force myself to do that.
I hadn't gone for a few months, in spite of there being a monthly satellite Venture Cafe at the Danforth Plant Science Center, which is within walking distance of where I live. I want to go, but I don't want to go. It's easy to find excuses not to go. It's too sunny. It's too cloudy. &c.
Last week I figured out a way to hack that: the Venture Cafe organizers were looking for volunteers, especially for this week when they had a regular Tuesday Venture Cafe plus one at the Danforth plus hosting representatives from eight other Venture Cafe organizations around the world. Perfect. I jumped on that—for both events. I got to help the organization and talk to tons of people at check-in and as a bartender. Win-win.
I can't believe I never thought of that before. It's like playing a trick on myself. Whatever hangups I might have about talking to people are irrelevant because it's more or less my job to talk to them. And it doesn't matter if I get self-conscious about having to say anything about myself or what I do or what I work on because while I'm there as a volunteer what I do and what I work on is the event. It's a ludicrously simple trick. I'm sure it has analogues elsewhere. Don't step into the water slowly—dive right into it.
On Thursday, 11 October 2018, Nick Hague and Алексей Овчинин (Alexey Ovchinin) blasted off from Kazakhstan on a routine launch to the International Space Station.
It's dangerous, but routine, more or less. Stuff a few humans into a capsule on top of a Death Machine that turns sparks and liquid combustibles into fire and Δp, and it's completely normal. Right? It's like getting in your car and taking a trip to the deli, except that gas is $2.95M per gallon and you have to use a new car next time.
But this time it wasn't so routine. Somewhere somehow their rocket glitched and they had to abort on the uphill part of the launch.
I think it's interesting when rockets go bad (so long as any with human payloads don't produce casualties) because at Orbital I worked on flight termination systems (blow the rocket up when it goes bad) and launch abort systems (the rocket on top of the rocket that pulls the crew capsule away from a rocket when it goes bad). It's fascinating. Nominally, they're both systems you don't want to use. But when you have to use them you want them to work.
That's not completely relevant to this flight because it wasn't a termination or rocket-powered abort, but close enough. At 2:45 they jettison the launch abort system; at 3:20 you can hear the alarm go off when they drop the first stage. So when they aborted the mission on this flight, there was no launch abort system to rip them off the rocket with big g-forces. Instead they separated the capsule from the top, then went up and came back down like a lawn dart, to be retrieved somewhere in the steppes of Kazakhstan.
So much for all that. That wasn't even the point of this post, really.
A native Iowan but otherwise good person, Ben Brockert, said this:
If the Soyuz on station lands on schedule, the ISS will be uncrewed for the first time in nearly two decades. Question is: does that matter? Is it important that humans live continuously in space if you assume that their primary activity is just keeping humans alive in space?
ISS has been occupied since November 2000. The last pressurized module was added in 2011. If there had been a glitch or an abandonment earlier, in the first decade, say, that would have been a bigger deal. It would have signified that perhaps the project couldn't be accomplished.
But the whole system worked. The thing speaks for itself. By whole system I don't just mean the design and fabrication of hardware, but also the transportation of modules to space, construction and integration, working with frenemies to do it, keeping the orbital supply chain going as new systems came online and old systems were put out to pasture. It was, and is, an incredible feat.
I didn't answer the questions. I think I should answer the questions.
#1: Does it matter? No.
#2: Is it important that humans live continuously in space if you assume that their primary activity is just keeping humans alive in space? No.
Caveat: Assuming the abandonment is temporary.
If the current occupants leave—and they will have to leave eventually, new ride or not, because the propellant stores on the backup Soyuz have a finite lifetime—and if the occasion of their leaving results in a retrenchment in the ISS program... and the Station is truly abandoned... and human spaceflight is abandoned, in the US at least, until name-your-billionaire starts running amusement flights, which will be at altitudes far below Station anyway... and we're just waiting for a commercial entity with one eye on the quarterly report to step up to the plate... The future doesn't look promising, as far as human spaceflight goes.
I don't know what the feeling that accompanies that thought is. I don't think it qualifies as sadness. Disappointment, maybe. A little bit of frustration. Growing up with an interest in spaceflight as a—goal? desire? wish? dream?—it leaves a hole in my heart to think that we could stop reaching Out There. I don't care that much about Station itself, but I do care about the futures that it implies, the futures in which humans take a step away from Earth, and a step, and a leap, and then go so far that we look back on our past selves as the slow and underperforming children we were.
Kurt Vonnegut has had an outsized influence on my life for someone I never met. I don't want to go into it here. But I do want to rip off a few lines from him, from Fates Worse Than Death. It seems appropriate enough:
If flying-saucer creatures or angels or whatever were to come here in a hundred years, say, and find us gone like the dinosaurs, what might be a good message for humanity to leave for them, maybe carved in great big letters on a Grand Canyon wall? Here is this old poop's suggestion: WE PROBABLY COULD HAVE SAVED OURSELVES, BUT WERE TOO DAMNED LAZY TO TRY VERY HARD...AND TOO DAMNED CHEAP.
It's not a hard-math or hard-science journal article; rather, it's an Old Systems Analyst explaining, with casual brutality, how high-level objectives are typically very vague (think: what is the objective of a nation?) but are necessary on some level in order to derive lower-level objectives (think: what kind of defense systems does a nation require?).
As systems engineers, one of our key jobs is to figure out what the hell it is that a stakeholder wants. Stakeholders know quite a bit about what they want, but not everything, and some of the things they think they want they can't put into words, or one thing conflicts with another, &c. So part of the job is helping the stakeholder figure out what the stakeholder wants, which involves some understanding of what the stakeholder's stakeholders want, and so on. (Never mind that I decided to skip an important part of the definition: who or what are the stakeholders—a major unasked question will arrive with its own answers anyway, later, at a more inconvenient time in the life cycle when you thought you were done.)
The obvious thing that system developers try to do is just receive the objectives from on high, Moses-style. Hitch explains three reasons that doesn't work:
Impossible to define appropriate objectives without knowing about the feasibility and cost of achieving them, which is derived from the analysis itself
High-level objectives tend to be non-existent, or so vague or literary as to be non-operational.
Objectives are multiple and conflicting, and alternative means of satisfying any one are likely to produce substantial and differential spillover effects on others.
So what does the analyst do? If he can't find anyone to give him acceptable objectives, where does he obtain them? The only answer I have is that learning about objectives is one of the chief objects of this kind of analysis. We must learn to look at objectives as critically and as professionally as we look at our models and our other inputs. We may, of course, begin with tentative objectives, but we must expect to modify or replace them as we learn about the systems we are studying -- and related systems. The feedback on objectives may in some cases be the most important result of our study.
It's hard work to figure out what the point of a system is. But it's the most important work. It's a fork in the road you can't come back to later.
Last Friday, I went to an event called Data for Good, hosted by Washington University's Olin Business School. Here are a few notes from the event...
The most interesting new thing I heard was about the St. Louis Vacancy Collaborative. In short: a group of people started working on a web portal at OpenSTL's 2017 hackathon to use data posted publicly by the city of St. Louis. They did this without permission—my kind of people—using data that was there, then provided the useful results to the city. There's a better description here from STLPR: Vacancy Portal opens door to data on abandoned parcels in St. Louis. Of course, the thing itself is interesting, but even more interesting is the thought that comes with it: there are opportunities to do useful work just lying around out there waiting to be discovered, and you don't have to be picked to do the work—you just decide to do the work. (See also: Seth Godin's latest Akimbo podcast: You're It.)
In the next panel, someone—I think it was Philip Bane of the Smart Cities Council—referred to wicked problems in designing solutions to social problems. "Wicked problems" is one of those terms that gets thrown around without much thought. The term originates here, in a paper you should read if you care at all about solving difficult, intertwined, impossible-to-optimize-for-everything problems: Rittel, Horst W. J.; Webber, Melvin M. (1973). "Dilemmas in a General Theory of Planning". Policy Sciences. 4: 155–169. (doi: 10.1007/bf01405730, pdf). The thing deserves its own post. In the meantime, here are some notes about it.
At the end of the day, Jake Porway of DataKind gave the keynote presentation. Here are a few recommended resources from his talk:
I listened in on an MIT Sloan Management Review webinar this morning, Planning for the Human-Digital Workforce, with Mary Lacity. I like to learn more about automation or augmentation or the general idea of What Happens Next when it comes to humans and computers, or humans vs. computers, or however you want to look at it. It's going to happen. It has happened. I do it myself, although in a really unsophisticated way. It's an interesting and anxious time.
Anyway. Here are a few notes from the presentation:
Robotic Process Automation: structured data; rules-based processes; deterministic outcomes
Cognitive Automation: structured and unstructured data; inference-based processes; probabilistic outcomes
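To make the distinction concrete, here's a minimal sketch of my own (not from the presentation, and the invoice-checking scenario is purely hypothetical): a rules-based check that always gives the same verdict for the same structured input, next to an inference-style check that scores unstructured text and returns a probability rather than a decision.

```python
# Hypothetical illustration of the RPA vs. cognitive automation split.

def rpa_invoice_check(invoice):
    """Rules-based: structured data in, deterministic outcome out."""
    # The same invoice dict always yields the same approve/reject answer.
    return invoice["amount"] <= 1000 and invoice["vendor_id"] is not None

def cognitive_invoice_check(text, risk_words=("urgent", "wire transfer")):
    """Inference-based: unstructured text in, probabilistic outcome out."""
    # A toy scoring model: the result is a confidence in [0, 1], not a verdict.
    hits = sum(word in text.lower() for word in risk_words)
    return hits / len(risk_words)

print(rpa_invoice_check({"amount": 500, "vendor_id": "V42"}))   # True
print(cognitive_invoice_check("URGENT: wire transfer needed"))  # 1.0
```

The point of the contrast: the first function can be audited line by line against a rulebook; the second only makes sense with a threshold and an error tolerance attached.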
So my wife challenged me to a drawing competition.
I don't know why. Maybe she was concerned that I had developed too much self-esteem recently. There's a cure for everything these days.
She had already challenged her parents on WeChat (and won), so I really had no grounds for holding out.
Judge not lest ye be &c.
See, I was trying to go for the I-can't-compete-on-skill-so-maybe-I-can-do-something-interesting-with-minimal-effort angle. Make the lines quick. Decisive. Get at the essence of the bird. The inner bird.
Kind of ended up with the angry chicken look in the end. As my mother-in-law put it: 不是个好鸟 (not a good bird).