Recent changes to this wiki:

diff --git a/blog/Good_riddance_netctl.mdwn b/blog/Good_riddance_netctl.mdwn
index 2a61a50..f45570c 100644
--- a/blog/Good_riddance_netctl.mdwn
+++ b/blog/Good_riddance_netctl.mdwn
@@ -69,3 +69,6 @@ Update: [Guide for the Raspberry PI on Archlinux Arm (alarm)](http://archpi.daba
 
 Acknowledgements: [WonderWoofy on the Archlinux
 forums](https://bbs.archlinux.org/viewtopic.php?pid=1393759#p1393759)
+
+
+What would be the solution if there are two wireless interfaces, wlan0 and wlan1? Should I use two configurations /etc/systemd/network/wlan0.network, /etc/systemd/network/wlan1.network, /etc/wpa_supplicant/wpa_supplicant@wlan0.conf and /etc/wpa_supplicant/wpa_supplicant@wlan1.conf, and then also start two wpa_supplicant services (wpa_supplicant@wlan0.service and wpa_supplicant@wlan1.service)? Or is it possible to handle two wireless interfaces with one configuration and one service?
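
A sketch of the single-configuration half of the answer (the file name and DHCP setting here are illustrative assumptions, on the premise that both interfaces should be configured identically): systemd-networkd lets one `.network` file match several interfaces via a glob, but `wpa_supplicant@.service` is a per-interface template, so you would still enable one service instance (and one `/etc/wpa_supplicant/wpa_supplicant@<iface>.conf`) per interface.

```ini
# /etc/systemd/network/wlan.network (hypothetical file name)
# One file matching both wlan0 and wlan1 via a glob
[Match]
Name=wlan*

[Network]
DHCP=yes
```

Then something like `systemctl enable wpa_supplicant@wlan0 wpa_supplicant@wlan1` for the per-interface supplicant instances.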

New tip
diff --git a/e/04063.mdwn b/e/04063.mdwn
new file mode 100644
index 0000000..fac417f
--- /dev/null
+++ b/e/04063.mdwn
@@ -0,0 +1,43 @@
+[[!meta title="Promise versus async/await"]]
+
+# Promise
+
+	var contrived = { count: Math.floor(Math.random() * 10) + 1   }
+	console.log(contrived)
+
+	let myFirstPromise = new Promise((resolve, reject) => {
+	  if (contrived.count > 0) {
+	    fetch("https://httpbin.org/ip")
+	      .then(function (res) { return res.json() })
+	      .then((json) => {
+	        Object.assign(contrived, json)
+	        resolve(contrived)
+	      })
+	  } else {
+	    console.log("here")
+	    resolve(contrived)
+	  }
+	})
+
+	myFirstPromise.then(console.log)
+
+# Async / Await
+
+	const outerContrived = {
+	  count: Math.floor(Math.random() * 10) + 1,
+	}
+	console.log('Before', outerContrived)
+
+	const goget = async function (contrived) {
+	  console.log('in goget', contrived)
+
+	  if (contrived.count > 0) {
+	    const response = await fetch('https://httpbin.org/ip')
+	    const json = await response.json()
+	    Object.assign(contrived, json)
+	  } else {
+	    console.log(contrived.count)
+	  }
+	}
+
+	goget(outerContrived)
+
+	console.log('After', outerContrived)
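
Note the ordering gotcha the example above demonstrates: `console.log('After', outerContrived)` runs before the awaited fetch completes, because `goget` returns a promise that is never awaited. A minimal sketch of the same effect, using a timer in place of the real fetch so it runs anywhere:

```javascript
// An async function runs synchronously up to its first await, then
// returns a promise; code after the call runs before the rest of it.
const order = []

const goget = async function (contrived) {
  order.push('in goget')
  await new Promise((resolve) => setTimeout(resolve, 10)) // stand-in for fetch
  contrived.done = true
  order.push('done inside goget')
}

const outer = { count: 1 }
goget(outer).then(() => order.push('then after goget'))
order.push('after the call') // runs before the awaited work finishes
```

So `order` is `['in goget', 'after the call']` immediately after the call; to really log 'After' last, you would `await goget(outerContrived)` inside another async function, or chain a `.then`.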

Modern update
diff --git a/u/record_wav.mdwn b/u/record_wav.mdwn
index ac30234..87d4262 100644
--- a/u/record_wav.mdwn
+++ b/u/record_wav.mdwn
@@ -1,20 +1,6 @@
-[[!meta title="Howto make a voice recording in Debian"]]
+[[!meta title="Howto make a voice recording"]]
 
-I did this for the [stackoverflow podcast](http://www.stackoverflow.com/).
+	ffmpeg -f pulse -i default test.wav
+	ffmpeg -i test.wav test.mp3
 
-Ensure your sound devices are setup with `alsamixer -V capture`.
 
-Record with `arecord -vv -fdat stackoverflow.wav`.
-
-Ensure the WAV file is less than 90 seconds with good old [sox](http://packages.qa.debian.org/s/sox.html) with `sox stackoverflow.wav -e stat`.
-
-Playback like `aplay stackoverflow.wav`
-
-Encode to MP3 with `lame stackoverflow.wav` from [[Debian_Multimedia|e/01126]]. Or with the better free codec [OGG](http://en.wikipedia.org/wiki/Ogg) with `oggenc stackoverflow.wav`.
-
-You'll want to do this to save space:
-
-	x61:~% ll stackoverflow.*
-	-rw-r--r-- 1 hendry hendry 1.6M 2008-04-19 18:18 stackoverflow.ogg
-	-rw-r--r-- 1 hendry hendry  25M 2008-04-19 17:58 stackoverflow.wav
-	-rw-r--r-- 1 hendry hendry 2.1M 2008-04-19 18:13 stackoverflow.wav.mp3

Driver
diff --git a/blog/Webkit_on_Rpi2.mdwn b/blog/Webkit_on_Rpi2.mdwn
index 1d391d8..faf9a05 100644
--- a/blog/Webkit_on_Rpi2.mdwn
+++ b/blog/Webkit_on_Rpi2.mdwn
@@ -1,3 +1,5 @@
+UPDATE 2017-04-02: <https://twitter.com/anholt/status/840753745721937920> there might be a working driver now...
+
 UPDATE: Latest OpenGL work status seems to be here, <https://dri.freedesktop.org/wiki/VC4/> which is **Required** for Webkit2.
 
 Since creating a [Webconverger Rpi2 port](https://webconverger.org/rpi2/) based

New tip
diff --git a/e/01179.mdwn b/e/01179.mdwn
new file mode 100644
index 0000000..4de5825
--- /dev/null
+++ b/e/01179.mdwn
@@ -0,0 +1,5 @@
+[[!meta title="rsync images to USB key"]]
+
+	rsync -rtP --prune-empty-dirs --dry-run --include="*/" --include='*.JPG' --exclude='*' 2017-02-1* /mnt/sda1/
+
+Many thanks to BasketCase on #rsync IRC

Document
diff --git a/e/04062.mdwn b/e/04062.mdwn
index e4d03fe..8b4ddd7 100644
--- a/e/04062.mdwn
+++ b/e/04062.mdwn
@@ -129,3 +129,25 @@ Correct is:
 
 Remember, then takes a second argument, e.g. `p.then(onResolved, onRejected)`,
 which fits perfectly with the errback signature `callback(err)`!
+
+# More short hand action
+
+foo.js
+
+	function foo (data) {
+	  return new Promise((resolve, reject) => {
+	    if (data.sdadas.dasdsada) { resolve(2) }
+	    resolve(1)
+	  })
+	}
+	module.exports = foo
+
+main.js:
+
+	const foo = require('./foo.js')
+
+	foo()
+	//.then((data) => { console.log(data) })
+	.then(console.log)
+	// .catch((e) => { console.log("oh dear", e) } )
+	.catch(console.log)
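
Why the `.catch` fires here: `main.js` calls `foo()` with no argument, so `data.sdadas` throws a `TypeError` inside the executor, and the Promise constructor turns a throw in the executor into a rejection. A self-contained sketch of the same behaviour:

```javascript
// A throw inside the Promise executor rejects the promise,
// so foo() with no argument lands in .catch, not .then.
function foo (data) {
  return new Promise((resolve, reject) => {
    if (data.sdadas.dasdsada) { resolve(2) } // throws TypeError when data is undefined
    resolve(1)
  })
}

foo() // data is undefined
  .then((value) => console.log('resolved', value))
  .catch((e) => console.log('oh dear', e.name)) // oh dear TypeError
```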

notes
diff --git a/e/04062.mdwn b/e/04062.mdwn
index 4467a4a..e4d03fe 100644
--- a/e/04062.mdwn
+++ b/e/04062.mdwn
@@ -49,7 +49,7 @@ Or event just:
 	  return Promise.resolve().then(() => data.Item || data)
 	}
 
-# const isn't that bad
+# [prefer const](https://github.com/feross/standard/issues/523)
 
 From Tim Oxley: seems a shame to lose benefits of const due to conditional
 initialisation. Instead of if/else + let I generally go for const + || in cases
@@ -65,3 +65,67 @@ like the above, or a ternary :
 	)
 	? valueIfTrue
 	: valueIfFalse
+
+# Single line arrows
+
+for single line arrows, you don't need the curlies or the return:
+
+	const a = () => { return get(uuidgen) }
+	// is equivalent to:
+	const b = () => get(uuidgen)
+
+i.e. right hand side of the arrow can be a block or an expression
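
One related gotcha (not in the original note): with an expression body that should return an object literal, the braces parse as a block, so the literal needs wrapping parentheses:

```javascript
// Block body: `a: 1` parses as a label plus expression, so this returns undefined
const broken = () => { a: 1 }

// Expression body: parentheses make it an object literal
const fixed = () => ({ a: 1 })

console.log(broken()) // undefined
console.log(fixed())  // { a: 1 }
```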
+
+
+# Promise refactoring
+
+	return Promise.resolve().then(() => { return })
+
+is the same as:
+
+	return Promise.resolve()
+
+Remember, with a promise you can resolve to either a promise or a value. If you
+resolve to a promise, the resolved value of the current promise will become the
+value resolved from the returned/resolved promise.
+
+These are (mostly) equivalent, and will both resolve to 3
+
+	Promise.resolve().then(() => {
+	  return 3
+	})
+
+	Promise.resolve().then(() => {
+	  return Promise.resolve(3)
+	})
+
+These are also (mostly) equivalent:
+
+	Promise.resolve()
+
+	Promise.resolve().then(() => {})
+
+	Promise.resolve().then(() => { return })
+
+	Promise.resolve().then(() => {
+	  return Promise.resolve(undefined)
+	})
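
A quick check of the claims above (a sketch; `Promise.all` just collects the settled values so they can be compared):

```javascript
// All four "nothing" variants settle to undefined, and resolving to
// a promise unwraps it, so the two 3-returning forms settle to 3.
const checks = Promise.all([
  Promise.resolve(),
  Promise.resolve().then(() => {}),
  Promise.resolve().then(() => { return }),
  Promise.resolve().then(() => Promise.resolve(undefined)),
  Promise.resolve().then(() => 3),
  Promise.resolve().then(() => Promise.resolve(3))
])

checks.then((values) => console.log(values)) // [ undefined, undefined, undefined, undefined, 3, 3 ]
```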
+
+# Two callbacks are bad!
+
+	function cb () {
+	  console.log('called');
+	  throw new Error('bad')
+	}
+
+	Promise.resolve()
+	.then(() => cb(null))
+	.catch(cb)
+
+Correct is:
+
+	Promise.resolve()
+	.then(() => cb(null), cb)
+
+Remember, then takes a second argument, e.g. `p.then(onResolved, onRejected)`,
+which fits perfectly with the errback signature `callback(err)`!
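
A runnable sketch of why the `.catch(cb)` wiring double-calls: the throw inside `cb` (invoked as the success handler) propagates to the `.catch`, which calls `cb` again, whereas the two-argument `then` never routes a throw in its `onResolved` to its own `onRejected`:

```javascript
// Count how many times cb runs under each wiring.
function demo (wire) {
  let calls = 0
  function cb () {
    calls++
    throw new Error('bad')
  }
  // settle either way, reporting the call count
  return wire(cb).then(() => calls, () => calls)
}

const catchStyle = demo((cb) =>
  Promise.resolve().then(() => cb(null)).catch(cb))   // cb runs twice

const twoArgStyle = demo((cb) =>
  Promise.resolve().then(() => cb(null), cb))         // cb runs once

catchStyle.then((n) => console.log('catch style:', n))     // catch style: 2
twoArgStyle.then((n) => console.log('two-arg style:', n))  // two-arg style: 1
```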

Some notes
diff --git a/e/04062.mdwn b/e/04062.mdwn
new file mode 100644
index 0000000..4467a4a
--- /dev/null
+++ b/e/04062.mdwn
@@ -0,0 +1,67 @@
+[[!meta title="Javascript Promises"]]
+
+[Promises FAQ](https://gist.github.com/joepie91/4c3a10629a4263a522e3bc4839a28c83#12-how-do-i-access-previous-results-from-the-promise-chain)
+
+[Promises with AWS](https://github.com/kaihendry/lambda-promises/)
+
+# How do i implement a reject on a thrown error?
+
+	return Promise.resolve().then(() => {
+	  if (!isGood) throw new Error('a wrench')
+	  return doneStuff
+	})
+
+# Refactoring example
+
+Bad:
+
+	function foo(data) {
+		return new Promise((resolve, reject) => {
+			console.log("debug data", data)
+			let video = {}
+			if (data.Item) {
+				video = data.Item
+			} else {
+				video = data
+			}
+			console.log("debug video", video)
+			resolve(video)
+		})
+	}
+
+	foo({ id: 12, title: "Back to the future"})
+	.then((output) => console.log(output))
+
+This is really bad (i.e. non-working) if you declare `video` with `const`, btw, since `video` gets reassigned!
+
+Good:
+
+	function foo (data) {
+		return Promise.resolve().then(() => {
+			console.log("debug data", data)
+			return data.Item || data
+		})
+	}
+
+Or even just:
+
+	function foo (data) {
+	  return Promise.resolve().then(() => data.Item || data)
+	}
+
+# const isn't that bad
+
+From Tim Oxley: seems a shame to lose benefits of const due to conditional
+initialisation. Instead of if/else + let I generally go for const + || in cases
+like the above, or a ternary:
+
+	// ternary
+	const thing = someCondition ? valueIfTrue : valueIfFalse
+	// multi-line conditional
+	const thing = (
+	  multi &&
+	  line &&
+	  conditional
+	)
+	? valueIfTrue
+	: valueIfFalse

New tip
diff --git a/e/15010.mdwn b/e/15010.mdwn
new file mode 100644
index 0000000..dcfc593
--- /dev/null
+++ b/e/15010.mdwn
@@ -0,0 +1,5 @@
+[[!meta title="Join words in a template by comma"]]
+
+Not working <https://play.golang.org/p/FAR__vMaEL>
+
+Working <https://play.golang.org/p/1V5ii0e8TC>

Another tip
diff --git a/e/04061.mdwn b/e/04061.mdwn
index 858a7d0..8450788 100644
--- a/e/04061.mdwn
+++ b/e/04061.mdwn
@@ -23,11 +23,22 @@ As used in this [Vue2 example](https://jsfiddle.net/kaihendry/8uqrkqg1/)
 If you ever find yourself writing code that looks like `var that = this` using
 arrow functions gets rid of that problem!
 
+However if that still doesn't work, you might need to bind `this` back in, like so:
+
+	fetch(url, obj)
+		.then(function (res) {
+			return res.json();
+		})
+		.then(function (resJson) {
+			this.video_options = resJson
+		}.bind(this))
+		.catch(function (err) {
+			console.error(err);
+		});
+
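
An arrow-function version avoids `.bind(this)` entirely, since arrows close over the enclosing `this`. A sketch, wrapped in a hypothetical stand-in component (with a fake `fetch` response) so it is self-contained:

```javascript
// Arrows inherit `this` from the enclosing scope, so no .bind(this).
const component = {
  video_options: null,
  load (fetchJson) {                  // fetchJson: stand-in for () => fetch(url, obj)
    return fetchJson()
      .then((res) => res.json())
      .then((resJson) => {
        this.video_options = resJson  // `this` is still `component`
      })
      .catch((err) => console.error(err))
  }
}

// fake fetch response for the sketch
const fakeFetch = () =>
  Promise.resolve({ json: () => Promise.resolve({ quality: '720p' }) })

component.load(fakeFetch).then(() =>
  console.log(component.video_options)) // { quality: '720p' }
```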
 # Promises
 
 I find constructing Promises still a little tricky, but note that the Fetch API supports them natively.
 
 * <https://developers.google.com/web/updates/2015/03/introduction-to-fetch>
 
-Way better than `XMLHttpRequest` callback hell, especially for catching errors
-/ writing robust code in poor networking situations which is all too common.
+Way better than `XMLHttpRequest` callback hell, especially for catching errors / writing robust code in poor networking situations which is all too common.

More so
diff --git a/e/04061.mdwn b/e/04061.mdwn
index 4f3fb3c..858a7d0 100644
--- a/e/04061.mdwn
+++ b/e/04061.mdwn
@@ -12,6 +12,12 @@ Can be written like this:
 
 	return this.items.find((item) => item.uuid === this.$route.params.uuid)
 
+If you need to match several items, use [filter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter):
+
+	return this.items.filter(item => item.status === status)
+
+As used in this [Vue2 example](https://jsfiddle.net/kaihendry/8uqrkqg1/)
+
 # Arrow functions
 
 If you ever find yourself writing code that looks like `var that = this` using
@@ -23,4 +29,5 @@ I find constructing Promises still a little tricky, but note that the Fetch API
 
 * <https://developers.google.com/web/updates/2015/03/introduction-to-fetch>
 
-Way better than `XMLHttpRequest` callback hell, especially for catching errors.
+Way better than `XMLHttpRequest` callback hell, especially for catching errors
+/ writing robust code in poor networking situations which is all too common.

Fixes
diff --git a/e/04061.mdwn b/e/04061.mdwn
index 0f8c3d6..4f3fb3c 100644
--- a/e/04061.mdwn
+++ b/e/04061.mdwn
@@ -19,9 +19,8 @@ arrow functions gets rid of that problem!
 
 # Promises
 
-I find contructing Promises still a little tricky, but note that the Fetch API supports them natively.
+I find constructing Promises still a little tricky, but note that the Fetch API supports them natively.
 
-* https://developers.google.com/web/updates/2015/03/introduction-to-fetch
-
-Way better than `XMLHttpRequest` callback hell!
+* <https://developers.google.com/web/updates/2015/03/introduction-to-fetch>
 
+Way better than `XMLHttpRequest` callback hell, especially for catching errors.

JS tips
diff --git a/e/04061.mdwn b/e/04061.mdwn
new file mode 100644
index 0000000..0f8c3d6
--- /dev/null
+++ b/e/04061.mdwn
@@ -0,0 +1,27 @@
+[[!meta title="Javascript ES6 refactoring"]]
+
+# Find loop
+
+	for (var i = 0; i < this.items.length; i++) {
+	  if (this.$route.params.uuid == this.items[i].uuid) {
+	    return this.items[i]
+	  }
+	}
+
+Can be written like this:
+
+	return this.items.find((item) => item.uuid === this.$route.params.uuid)
+
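
A quick check, with hypothetical items, that the loop and the `find` refactor agree — `find` returns the first matching element, or `undefined` if none matches:

```javascript
const items = [
  { uuid: 'a', title: 'one' },
  { uuid: 'b', title: 'two' }
]
const wanted = 'b'

// the loop version
let loopResult
for (var i = 0; i < items.length; i++) {
  if (wanted == items[i].uuid) {
    loopResult = items[i]
    break
  }
}

// the refactored version
const findResult = items.find((item) => item.uuid === wanted)

console.log(loopResult === findResult) // true
console.log(findResult.title)          // two
```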
+# Arrow functions
+
+If you ever find yourself writing code that looks like `var that = this` using
+arrow functions gets rid of that problem!
+
+# Promises
+
+I find contructing Promises still a little tricky, but note that the Fetch API supports them natively.
+
+* https://developers.google.com/web/updates/2015/03/introduction-to-fetch
+
+Way better than `XMLHttpRequest` callback hell!
+
diff --git a/templates/page.tmpl b/templates/page.tmpl
index a919ae2..8e7a618 100644
--- a/templates/page.tmpl
+++ b/templates/page.tmpl
@@ -226,7 +226,7 @@ Last edited <TMPL_VAR MTIME>
 
 <fieldset style='margin: 2em; font-family: "Helvetica Neue Thin", sans-serif;' class=feedback>
 <legend>Feedback</legend>
-<form onsubmit="return feedback(this);" style="margin: 1em;" method="post">
+<form onsubmit="return feedback(this);" style="margin: 1em;">
 
 <p class=field>
 <label for=from class=fieldname>

Using AWS API Gateway + Lambda instead
diff --git a/templates/page.tmpl b/templates/page.tmpl
index 88e23ab..a919ae2 100644
--- a/templates/page.tmpl
+++ b/templates/page.tmpl
@@ -226,7 +226,7 @@ Last edited <TMPL_VAR MTIME>
 
 <fieldset style='margin: 2em; font-family: "Helvetica Neue Thin", sans-serif;' class=feedback>
 <legend>Feedback</legend>
-<form onsubmit="return feedback(this);" style="margin: 1em;" action="https://feedback.dabase.com/feedback/feedback.php" method="post">
+<form onsubmit="return feedback(this);" style="margin: 1em;" method="post">
 
 <p class=field>
 <label for=from class=fieldname>
@@ -248,7 +248,36 @@ Your feedback
 <p style="font-size: x-small; display: inline;">Powered by <a href="https://github.com/kaihendry/vanilla-php-feedback-form">Vanilla PHP feedback form</a></p>
 
 </form>
-<script src=http://feedback.dabase.com/feedback/feedback.js></script>
+<script>
+function feedback(feedbackform) {
+
+	const formData = {};
+
+	for (let input of feedbackform) {
+		if (input.name && input.value) {
+			formData[input.name] = input.value;
+		}
+	}
+
+	feedbackform.send.value = "Sending...";
+	feedbackform.send.disabled = true;
+
+	fetch("https://eb1tv85d00.execute-api.ap-southeast-1.amazonaws.com/prod", { method: "POST", body: JSON.stringify(formData) }).then(function(res){
+		if (res.ok) {
+			console.log(res);
+			feedbackform.send.value = "Sent!";
+		} else {
+			console.log("error", res);
+			feedbackform.send.value = "Error, try again";
+			feedbackform.send.disabled = false;
+		}
+	});
+
+	return false;
+}
+
+
+</script>
 </fieldset>
 
 <script>

Link video
diff --git a/blog/AWS_ECS_Workflow.mdwn b/blog/AWS_ECS_Workflow.mdwn
index 89bef0c..6b94272 100644
--- a/blog/AWS_ECS_Workflow.mdwn
+++ b/blog/AWS_ECS_Workflow.mdwn
@@ -1,5 +1,7 @@
 Following up after [[ECS_questions]]
 
+<iframe width="560" height="315" src="https://www.youtube.com/embed/onTnyvrHggo" frameborder="0" allowfullscreen></iframe>
+
 # docker-compose.yml example
 
 	version: '2'
@@ -21,11 +23,20 @@ Assuming you have the `docker-compose.yml` ready and `ecs-cli configure`d,
 start up the cluster of an EC2 instance running the service like so:
 
 	ecs-cli up --capability-iam --keypair example_sysadmin
-	ecs-cli compose service up
 	ecs-cli compose service ps
 
 # [Setting up the load balancer is manual](https://github.com/aws/amazon-ecs-cli/issues/1#issuecomment-235465247)
 
+Create the service by attaching the load balancer to the task
+
+	export CLUSTER_NAME=default
+
+	aws --region ap-southeast-1 ecs create-service --service-name "ecscompose-service-count" \
+		--cluster "$CLUSTER_NAME" \
+		--task-definition "ecscompose-count" \
+		--load-balancers "loadBalancerName=ecs-count,containerName=web,containerPort=9000" \
+		--desired-count 1 --deployment-configuration "maximumPercent=100,minimumHealthyPercent=50" --role ecsServiceRole
+
 * Make sure it's in the VPC of the created ECS (look at tags)
 * Make sure the security groups are permissive
 * Make sure the health check is /, not `/index.html`
@@ -104,9 +115,9 @@ No way to terminate last EC2 Instance in cluster:
 
 1. `ecs-cli up --capability-iam --keypair spuul_sysadmin`
 2. `ecs-cli compose service up` to create service
-2. Find VPC name <http://s.natalian.org/2016-08-02/1470124829_2558x1404.png
+2. Find VPC name <http://s.natalian.org/2016-08-02/1470124829_2558x1404.png>
 >
-3. Create ELB <http://s.natalian.org/2016-08-02/1470125741_2558x1404.png
+3. Create ELB <http://s.natalian.org/2016-08-02/1470125741_2558x1404.png>
 >
 4. Create service from task and associate ELB
 5. Update Route 53 Failover record with ELB name in case things really go badly

Update template
diff --git a/templates/page.tmpl b/templates/page.tmpl
index c7d78df..88e23ab 100644
--- a/templates/page.tmpl
+++ b/templates/page.tmpl
@@ -230,7 +230,7 @@ Last edited <TMPL_VAR MTIME>
 
 <p class=field>
 <label for=from class=fieldname>
-Your email address (leave blank if you want to be anonymous)
+Your email address (don't worry, I won't share it)
 </label>
 <input style="width: 100%; display: block;" type=email id=from name=from>
 </p>

AWS SDK PHP
diff --git a/blog/AWS_PHP_SDK_v3.mdwn b/blog/AWS_PHP_SDK_v3.mdwn
new file mode 100644
index 0000000..004bdf8
--- /dev/null
+++ b/blog/AWS_PHP_SDK_v3.mdwn
@@ -0,0 +1,48 @@
+Whilst "Dockerizing" a really simple [PHP project to send feedback
+email](https://github.com/kaihendry/vanilla-php-feedback-form), I had the
+unfortunate experience of tussling with the [AWS PHP
+SDK](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-using-sdk-php.html)
+in
+<https://github.com/kaihendry/vanilla-php-feedback-form/blob/master/feedback/sesmail.php>
+
+
+# Problem 1: Difficult to distinguish between SDK v2 and v3
+
+The only way I've figured this out is that examples with `SesClient::factory`
+are v2. Otherwise I would expect `use Aws\Ses\SesClient;` to perhaps indicate
+the version.
+
+So this causes a lot of pain because I'm lazy and I'm looking for examples to
+get this working quickly. Unfortunately most of Google results are for v2 and
+don't work!
+
+# Problem 2: The API is really hard to use
+
+Ignoring the fact that the SDK weighs in at 8.2MB, I guess AWS must have some
+automatic mapping to PHP and it really makes it HORRIBLE to use.
+
+So what most people use is some API on top of the SDK or some other cut-down
+third party library. Neither option is great.
+
+It almost brings me to tears how we have gone from a simple 1 line invocation
+of PHP's `mail()` to the 40 lines of code of
+[sesmail.php](https://github.com/kaihendry/vanilla-php-feedback-form/blob/master/feedback/sesmail.php).
+
+Sidenote: [[ssmtp|Mail_from_a_VPS]] is not an option since it's sadly synchronous and slow.
+
+# Problem 3: The AWS PHP SDK documentation sucks
+
+It's not crystal clear what's required and what's optional in the [AWS SDK
+documentation for sending an
+email](http://docs.aws.amazon.com/aws-sdk-php/v3/api/api-email-2010-12-01.html#sendemail).
+Where is a minimalistic example? Where is a slightly more realistic sample?
+
+# Problem 4: Optional parameters are awkward
+
+With `$result = $SesClient->sendEmail([` you inline an object. How do you
+choose not to include **ReplyToAddresses**? The SDK moans if it's empty!
+
+
+# Conclusion
+
+Nightmare.

Networking tips
diff --git a/blog/AWS_ECS_Workflow.mdwn b/blog/AWS_ECS_Workflow.mdwn
index 42a7d14..89bef0c 100644
--- a/blog/AWS_ECS_Workflow.mdwn
+++ b/blog/AWS_ECS_Workflow.mdwn
@@ -12,7 +12,7 @@ Following up after [[ECS_questions]]
 
 <img src=http://s.natalian.org/2016-07-27/ecs-instance.png alt="ECS instance resources">
 
-Mem limit was figured by taking **Available Memory**, t2.micro is 995 and getting bytes like so: 
+Mem limit was figured by taking **Available Memory**, t2.micro is 995 and getting bytes like so:
 
 	~$ echo $((995 << 20))
 	1043333120
@@ -50,7 +50,7 @@ So you need to terminate the instance manually if you want to save some money.
 
 Typically a 100mb upload
 
-Then bring down the service and up the service like so: http://s.natalian.org/2016-07-27/ecs.txt
+Then bring down the service and up the service like so: <http://s.natalian.org/2016-07-27/ecs.txt>
 
 # Bringing down a service
 
 it works. It needs some careful orchestration with Health limits.
 
 Upgrading an instance basically means killing it and starting it again. I guess
 that would entail a scale event to launch a new instance. ssh to it to ensure
-it's uptodate. And then decommission the old one by removing it from the load
-balancer.
+it's up to date (security group permitting). And then decommission the old one by
+removing it from the load balancer.
+
+# Issues
+
+No way to terminate last EC2 Instance in cluster:
+
+	$ ecs-cli scale --size 0 --capability-iam
+	INFO[0002] Waiting for your cluster resources to be updated
+	INFO[0003] Cloudformation stack status                   stackStatus=UPDATE_IN_PROGRESS
+	ERRO[0035] Failure event                                 reason= resourceType=AWS::CloudFormation::Stack
+	ERRO[0035] Error executing 'scale': Cloudformation failure waiting for 'UPDATE_COMPLETE'. State is 'UPDATE_ROLLBACK_COMPLETE'
+
+
+# Steps
+
+1. `ecs-cli up --capability-iam --keypair spuul_sysadmin`
+2. `ecs-cli compose service up` to create service
+2. Find VPC name <http://s.natalian.org/2016-08-02/1470124829_2558x1404.png
+>
+3. Create ELB <http://s.natalian.org/2016-08-02/1470125741_2558x1404.png
+>
+4. Create service from task and associate ELB
+5. Update Route 53 Failover record with ELB name in case things really go badly
diff --git a/e/18002.mdwn b/e/18002.mdwn
new file mode 100644
index 0000000..380235d
--- /dev/null
+++ b/e/18002.mdwn
@@ -0,0 +1,29 @@
+[[!meta title="L2TP tunnel notes"]]
+
+Assuming you have correctly set up the configuration like so:
+
+<https://support.aa.net.uk/L2TP_Client:_Linux>
+
+Watch the logs with:
+
+	journalctl -u xl2tpd -f
+
+Now start `xl2tpd`, which does the tunneling.
+
+Then establish the tunnel:
+
+	# echo "c myisp" > /var/run/xl2tpd/l2tp-control
+
+The point-to-point **ppp0** interface should come up. Now we need to route
+traffic through it, but make sure we don't disrupt the existing route to the
+tunnel endpoint. Let's assume we connect to the tunnel at the IP address
+2.2.2.2.
+
+`enp0s31f6` is the unique name of my wired interface.
+
+	ip route add 2.2.2.2 via 192.168.1.1 dev enp0s31f6
+
+We are saying that to get to 2.2.2.2, we need to go through our normal router at `192.168.1.1`.
+
+Now for all other traffic: say the IP address you pop out of is `81.81.81.81`.
+We set a new route so that all, or "default", traffic gets tunneled through it, like so:
+
+	ip route add default via 81.81.81.81 dev ppp0

ECS stuff
diff --git a/blog/AWS_ECS_Workflow.mdwn b/blog/AWS_ECS_Workflow.mdwn
new file mode 100644
index 0000000..42a7d14
--- /dev/null
+++ b/blog/AWS_ECS_Workflow.mdwn
@@ -0,0 +1,90 @@
+Following up after [[ECS_questions]]
+
+# docker-compose.yml example
+
+	version: '2'
+	services:
+	  web:
+		image: kaihendry/count
+		ports:
+		 - "80:8080"
+		mem_limit: 1043333120
+
+<img src=http://s.natalian.org/2016-07-27/ecs-instance.png alt="ECS instance resources">
+
+Mem limit was figured by taking **Available Memory**, t2.micro is 995 and getting bytes like so: 
+
+	~$ echo $((995 << 20))
+	1043333120
+
+Assuming you have the `docker-compose.yml` ready and `ecs-cli configure`d,
+start up the cluster of an EC2 instance running the service like so:
+
+	ecs-cli up --capability-iam --keypair example_sysadmin
+	ecs-cli compose service up
+	ecs-cli compose service ps
+
+# [Setting up the load balancer is manual](https://github.com/aws/amazon-ecs-cli/issues/1#issuecomment-235465247)
+
+* Make sure it's in the VPC of the created ECS (look at tags)
+* Make sure the security groups are permissive
+* Make sure the health check is /, not `/index.html`
+* Setup Route 53 to point to ELB (e.g. ecs.example.com)
+* Website requires SSL
+* You do not need to setup ELB in the service, in fact it's a lot easier if you don't since it gets confused by port re-mappings.
+
+# Caveats
+
+`ecs-cli down` will not work until you remove the ELB
+<https://console.aws.amazon.com/support/home?region=us-west-2#/case/?displayId=1820472381&language=en>.
+So you need to terminate the instance manually if you want to save some money.
+
+`ecs-cli up` by default will create a security group that you can't ssh into.
+
+# Developer workflow
+
+	aws ecr get-login --profile example --region us-west-2
+	docker build -t website .
+	docker tag website:latest 111111111.dkr.ecr.us-west-2.amazonaws.com/website:latest
+	docker push 111111111.dkr.ecr.us-west-2.amazonaws.com/website:latest
+
+Typically a 100mb upload
+
+Then bring down the service and up the service like so: http://s.natalian.org/2016-07-27/ecs.txt
+
+# Bringing down a service
+
+	ecs-cli compose service down
+
+Will bring down the particular service. Note that it's conceivable that many
+services run on an instance.
+
+You can kill the entire cluster like `ecs-cli down --force`, but I don't
+recommend that because the VPC will change when you re-establish it, so your
+previously manually set up load balancer will not work.
+
+Current workaround is to manually terminate the EC2 instance from the Web console if I am not requiring it.
+
+# DNS failover
+
+<img src=http://s.natalian.org/2016-07-28/1469675711_2558x701.png>
+
+Notice that "Evaluate Target Health" is enough for the Failover rule to know
+the load balancer is out of action. You do not need a health check!
+
+We fail over to Cloudfront since we require SSL to work.
+
+# Upgrade flow
+
+## Upgrading the container image
+
+Running two instances with the container running on each and then doing a
+`ecs-cli compose service up` should upgrade the service I'm told, but I doubt
+it works. It needs some careful orchestration with Health limits.
+
+## Upgrading the EC2 instance image
+
+Upgrading an instance basically means killing it and starting it again. I guess
+that would entail a scale event to launch a new instance. ssh to it to ensure
+it's up to date. And then decommission the old one by removing it from the load
+balancer.

Tweaks
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 6a4ce34..084555f 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -7,6 +7,8 @@ Manual everything.
 Not scalable at all. Well it might be surprisingly scalable if the code is well
 engineered, since everything is so simple! ;)
 
+Probably never reboots, despite updates (e.g. a new Linux kernel) requiring it.
+
 # Phase 2
 
 Using a [Configuration
@@ -17,6 +19,9 @@ Scalability achieved with AMI snapshots and such. Zero downtime with a [load
 balancer](https://aws.amazon.com/elasticloadbalancing/) fronting at least two
 instances running your App independently of one another.
 
+Updates are applied before making a new AMI image. Latest AMI images are
+applied haphazardly.
+
 # Phase 3
 
 Using <abbr title="Platform as a Service">PaaS</abbr> like
@@ -29,6 +34,8 @@ tricks](http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/).
 Probably best solution for personal projects or starting out since it's fairly
 simple.
 
+Probably never reboots or applies security updates properly.
+
 # Phase 4
 
 Using Docker & containerizing all the things on something like
@@ -37,7 +44,7 @@ as manual as making sure you can quickly spin up new CoreOS instances & run the
 Docker images with a load balancer in front.
 
 Bonus points if you have moved from Postfix <abbr title="Mail transfer
-agent">MTA</abbr> to a Restful mail API or queue.
+agent">MTA</abbr> to an external Restful mail API or queue.
 
 Bonus points is if one has figured out how to get a <abbr title="Continuous
 Integration">CI</abbr> to build the image and deploy it.
@@ -45,6 +52,9 @@ Integration">CI</abbr> to build the image and deploy it.
 Bonus points for running two instances for Blue/Green deployments behind the
 load balancer for zero downtime.
 
+Serious kudos if you manage to orchestrate your updates with [etcd
+locks](https://coreos.com/os/docs/latest/update-strategies.html) and have no downtime.
+
 # Phase 5
 
 Orchestrating Docker deployments with [Docker
@@ -55,9 +65,9 @@ containers on EC2.
 
 Tip: Getting into ECS like myself? Checkout [[ECS questions]]!
 
-Scales rather well, providing you have [Service Auto Scaling in your
+Scales in a complex way (worry about both Container & Instance utilisation
+and timings), providing you have [Service Auto Scaling in your
 region](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html).
-Quite complex and heavyweight.
 
 # Phase 6
 
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index b25b55c..31139e0 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -95,3 +95,7 @@ NOTICE **CPUReservation** and **CPUUtilization** alarms.
 NOTICE [EC2 autoscaling](https://us-west-2.console.aws.amazon.com/ec2/autoscaling/home?region=us-west-2#AutoScalingGroups:) & [Service autoscaling](http://s.natalian.org/2016-07-21/1469084154_2558x1404.png) ARE NOT THE SAME!!
 
 <img src=http://s.natalian.org/2016-07-21/1469084164_2558x1404.png alt="ECS alarm">
+
+# What are my Resources? What is my memory limit?
+
+<img src=http://s.natalian.org/2016-07-22/1469173592_2558x1404.png>

How to scale
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index d1c970a..b25b55c 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -84,6 +84,14 @@ My question RE more env variables: <https://github.com/aws/amazon-ecs-agent/issu
 
 <https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/>
 
-Remember [EC2 autoscaling](https://us-west-2.console.aws.amazon.com/ec2/autoscaling/home?region=us-west-2#AutoScalingGroups:) & [Service autoscaling](http://s.natalian.org/2016-07-21/1469084154_2558x1404.png) ARE NOT THE SAME!!
+	Because scaling ECS services is much faster than scaling an ECS cluster (of
+	EC2 instances), we recommend keeping the ECS cluster scaling alarm more
+	responsive than the ECS service alarm.
+
+NOTICE **CPUReservation** and **CPUUtilization** alarms.
+
+<img src=https://s3.amazonaws.com/chrisb/Alarms.png alt="ECS scaling alarms">
+
+NOTICE [EC2 autoscaling](https://us-west-2.console.aws.amazon.com/ec2/autoscaling/home?region=us-west-2#AutoScalingGroups:) & [Service autoscaling](http://s.natalian.org/2016-07-21/1469084154_2558x1404.png) ARE NOT THE SAME!!
 
 <img src=http://s.natalian.org/2016-07-21/1469084164_2558x1404.png alt="ECS alarm">

Link to video
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index 7c15bd6..d1c970a 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -1,6 +1,4 @@
-<a href=http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html>
-<img src=http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/load-balancing.png>
-</a>
+<iframe width="560" height="315" src="https://www.youtube.com/embed/Imeb-_g_CtU" frameborder="0" allowfullscreen></iframe>
 
 # Biggest tip use `ecs-cli`
 
@@ -84,6 +82,8 @@ My question RE more env variables: <https://github.com/aws/amazon-ecs-agent/issu
 
 # How does one actually scale?
 
+<https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/>
+
 Remember [EC2 autoscaling](https://us-west-2.console.aws.amazon.com/ec2/autoscaling/home?region=us-west-2#AutoScalingGroups:) & [Service autoscaling](http://s.natalian.org/2016-07-21/1469084154_2558x1404.png) ARE NOT THE SAME!!
 
 <img src=http://s.natalian.org/2016-07-21/1469084164_2558x1404.png alt="ECS alarm">

Usage log
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index 5451b15..7c15bd6 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -2,6 +2,14 @@
 <img src=http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/load-balancing.png>
 </a>
 
+# Biggest tip use `ecs-cli`
+
+<https://github.com/aws/amazon-ecs-cli>
+
+This helps set up the instances, set up the tasks via `docker-compose.yml` and scale it!!
+
+[Example ecs-cli usage log](http://s.natalian.org/2016-07-21/ecs-service.txt) where I setup a cluster in Oregon for <https://github.com/kaihendry/letterly>.
+
 # With the Amazon AMI, who keeps the host machine up to date?
 
 	~$ ssh ec2-user@54.255.129.57
@@ -29,6 +37,8 @@ You need to ensure you have right EC2 Role for ECS for a start!
 
 Follow <http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html> carefully!
 
+Or just use `ecs-cli up --capability-iam` to avoid this problem space.
+
 # What log driver should I be using to get into Kibana?
 
 * awslogs
@@ -58,7 +68,7 @@ Your health check is probably pointing to port 80 when you should be checking th
 
 # What is the developer work flow?
 
-<https://github.com/aws/amazon-ecs-cli/issues/136>
+<https://github.com/aws/amazon-ecs-cli/issues/136> suggests `ecs-cli compose service up`
 
 I realise one could run **Service Update** to deliver an update. However after
 pushing changes back to Github, how does a developer deploy to the service
@@ -70,4 +80,10 @@ Sharing environment values between clusters seems pretty hard:
 
 <https://github.com/aws/amazon-ecs-agent/issues/347>
 
-My question RE more env variables: <https://github.com/aws/amazon-ecs-agent/issues/456>
+My question RE more env variables: <https://github.com/aws/amazon-ecs-agent/issues/456> & <https://forums.docker.com/t/exposing-image-id-and-hostname-in-the-containers-environment/18634>
+
+# How does one actually scale?
+
+Remember [EC2 autoscaling](https://us-west-2.console.aws.amazon.com/ec2/autoscaling/home?region=us-west-2#AutoScalingGroups:) & [Service autoscaling](http://s.natalian.org/2016-07-21/1469084154_2558x1404.png) ARE NOT THE SAME!!
+
+<img src=http://s.natalian.org/2016-07-21/1469084164_2558x1404.png alt="ECS alarm">
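`ecs-cli compose` drives the task definition from a plain `docker-compose.yml` sitting next to the code. A minimal sketch (the image name, port mapping and memory limit here are illustrative, not taken from the letterly setup):

```yaml
# Hypothetical docker-compose.yml for `ecs-cli compose service up`.
# Image, ports and mem_limit are made-up values for illustration.
web:
  image: kaihendry/letterly
  ports:
    - "80:3000"
  mem_limit: 268435456
```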

More so
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index 8c9a036..5451b15 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -1,3 +1,7 @@
+<a href=http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html>
+<img src=http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/load-balancing.png>
+</a>
+
 # With the Amazon AMI, who keeps the host machine up to date?
 
 	~$ ssh ec2-user@54.255.129.57
@@ -42,7 +46,6 @@ Unanswered
 
 I thought one good thing about containers is that they can share all the memory of the host!
 
-
 # Port mappings are kinda confusing
 
 You map ELB front end ports to the instance ports effectively.
@@ -53,7 +56,7 @@ You map ELB front end ports to the instance ports effectively.
 
 Your health check is probably pointing to port 80 when you should be checking the port of your container!
 
-## What is the developer work flow?
+# What is the developer work flow?
 
 <https://github.com/aws/amazon-ecs-cli/issues/136>
 
@@ -61,4 +64,10 @@ I realise one could run **Service Update** to deliver an update. However after
 pushing changes back to Github, how does a developer deploy to the service
 themselves?
 
+# Setting environment up
+
+Sharing environment values between clusters seems pretty hard:
+
+<https://github.com/aws/amazon-ecs-agent/issues/347>
 
+My question RE more env variables: <https://github.com/aws/amazon-ecs-agent/issues/456>

Better explanation hopefully
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 5110619..6a4ce34 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -14,8 +14,8 @@ Management](https://en.wikipedia.org/wiki/Configuration_management) tool like
 Chef, Puppet or Ansible.
 
 Scalability achieved with AMI snapshots and such. Zero downtime with a [load
-balancer](https://aws.amazon.com/elasticloadbalancing/) fronting two instances
-running your App.
+balancer](https://aws.amazon.com/elasticloadbalancing/) fronting at least two
+instances running your App independently of one another.
 
 # Phase 3
 
@@ -36,7 +36,8 @@ Using Docker & containerizing all the things on something like
 as manual as making sure you can quickly spin up new CoreOS instances & run the
 Docker images with a load balancer in front.
 
-Bonus points if you have moved from Postfix MTA to a Restful mail API or queue.
+Bonus points if you have moved from Postfix <abbr title="Mail transfer
+agent">MTA</abbr> to a Restful mail API or queue.
 
 Bonus points if one has figured out how to get a <abbr title="Continuous
 Integration">CI</abbr> to build the image and deploy it.
@@ -56,7 +57,7 @@ Tip: Getting into ECS like myself? Checkout [[ECS questions]]!
 
 Scales rather well, providing you have [Service Auto Scaling in your
 region](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html).
-Quite complex and heavyweight. 
+Quite complex and heavyweight.
 
 # Phase 6
 

More so so
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index e574ba6..5110619 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -4,7 +4,8 @@ Ignoring the complexities surrounding data.
 
 Manual everything.
 
-Not scalable at all.
+Not scalable at all. Well it might be surprisingly scalable if the code is well
+engineered, since everything is so simple! ;)
 
 # Phase 2
 
@@ -12,8 +13,9 @@ Using a [Configuration
 Management](https://en.wikipedia.org/wiki/Configuration_management) tool like
 Chef, Puppet or Ansible.
 
-Scalability achieved with AMI snapshots and such. Zero downtime with an
-[ELB](https://aws.amazon.com/elasticloadbalancing/).
+Scalability achieved with AMI snapshots and such. Zero downtime with a [load
+balancer](https://aws.amazon.com/elasticloadbalancing/) fronting two instances
+running your App.
 
 # Phase 3
 
@@ -34,6 +36,8 @@ Using Docker & containerizing all the things on something like
 as manual as making sure you can quickly spin up new CoreOS instances & run the
 Docker images with a load balancer in front.
 
+Bonus points if you have moved from Postfix MTA to a Restful mail API or queue.
+
 Bonus points if one has figured out how to get a <abbr title="Continuous
 Integration">CI</abbr> to build the image and deploy it.
 
@@ -50,7 +54,9 @@ containers on EC2.
 
 Tip: Getting into ECS like myself? Check out [[ECS questions]]!
 
-Scales rather well. Quite complex and heavyweight.
+Scales rather well, providing you have [Service Auto Scaling in your
+region](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html).
+Quite complex and heavyweight. 
 
 # Phase 6
 

Checkout
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 8ba1fc6..e574ba6 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -48,6 +48,8 @@ title="Elastic Container Repository">ECR</abbr> (private Docker image hosting)</
 and AWS <abbr title="Elastic Container Service">ECS</abbr> for managing the
 containers on EC2.
 
+Tip: Getting into ECS like myself? Check out [[ECS questions]]!
+
 Scales rather well. Quite complex and heavyweight.
 
 # Phase 6

Small updates
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index 5aefbea..8c9a036 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -38,8 +38,11 @@ Unanswered
 
 # What are CPU units and memory supposed to mean?
 
+<img src=http://s.natalian.org/2016-07-20/1468980692_2558x1404.png>
+
 I thought one good thing about containers is that they can share all the memory of the host!
 
+
 # Port mappings are kinda confusing
 
 You map ELB front end ports to the instance ports effectively.
@@ -52,6 +55,10 @@ Your health check is probably pointing to port 80 when you should be checking th
 
 ## What is the developer work flow?
 
+<https://github.com/aws/amazon-ecs-cli/issues/136>
+
 I realise one could run **Service Update** to deliver an update. However after
 pushing changes back to Github, how does a developer deploy to the service
 themselves?
+
+
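The CPU unit question above does have a concrete anchor: ECS expresses CPU as integer units, with 1024 units corresponding to one vCPU, so a task's reservation is just a fraction of a core. A quick sanity check of the arithmetic:

```shell
# ECS CPU units: 1024 units correspond to one vCPU, so a task
# reserving 512 units claims half a core of the instance.
task_units=512
echo "$((task_units * 100 / 1024))% of one vCPU"
```

Memory, by contrast, is reserved in MiB, which is what makes the "containers share the host's memory" intuition clash with ECS's bookkeeping.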

A question
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
index 46f1c73..5aefbea 100644
--- a/blog/ECS_questions.mdwn
+++ b/blog/ECS_questions.mdwn
@@ -11,7 +11,6 @@
 		   Run "sudo yum update" to apply all updates.
 		   [ec2-user@ip-172-30-0-233 ~]$ docker ps
 
-
 Should I be using CoreOS?
 
 # Tip: to debug look at the events log
@@ -50,3 +49,9 @@ You map ELB front end ports to the instance ports effectively.
 <http://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting.html#troubleshoot-service-load-balancers>
 
 Your health check is probably pointing to port 80 when you should be checking the port of your container!
+
+## What is the developer work flow?
+
+I realise one could run **Service Update** to deliver an update. However after
+pushing changes back to Github, how does a developer deploy to the service
+themselves?

questions
diff --git a/blog/ECS_questions.mdwn b/blog/ECS_questions.mdwn
new file mode 100644
index 0000000..46f1c73
--- /dev/null
+++ b/blog/ECS_questions.mdwn
@@ -0,0 +1,52 @@
+# With the Amazon AMI, who keeps the host machine up to date?
+
+	~$ ssh ec2-user@54.255.129.57
+
+	   __|  __|  __|
+		  _|  (   \__ \   Amazon ECS-Optimized Amazon Linux AMI 2016.03.e
+		   ____|\___|____/
+
+		   For documentation visit, http://aws.amazon.com/documentation/ecs
+		   No packages needed for security; 3 packages available
+		   Run "sudo yum update" to apply all updates.
+		   [ec2-user@ip-172-30-0-233 ~]$ docker ps
+
+
+Should I be using CoreOS?
+
+# Tip: to debug look at the events log
+
+Here I later found my ELB was in the wrong VPC with "HTTP/1.1 503 Service Unavailable: Back-end server is at capacity".
+
+<http://s.natalian.org/2016-07-19/1468908777_2558x1404.png>
+
+# Tip: Be careful when setting up the EC2
+
+You need to ensure you have the right EC2 Role for ECS for a start!
+
+Follow <http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html> carefully!
+
+# What log driver should I be using to get into Kibana?
+
+* awslogs
+* fluentd
+* gelf
+* journald
+* json-file
+* syslog
+
+Unanswered
+
+# What are CPU units and memory supposed to mean?
+
+I thought one good thing about containers is that they can share all the memory of the host!
+
+# Port mappings are kinda confusing
+
+You map ELB front end ports to the instance ports effectively.
+
+# (service service-name) (instance instance-id) is unhealthy in (elb elb-name) due to (reason Instance has failed at least the UnhealthyThreshold number of health checks consecutively.)
+
+<http://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting.html#troubleshoot-service-load-balancers>
+
+Your health check is probably pointing to port 80 when you should be checking the port of your container!

Nicer link
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 5a0d069..8ba1fc6 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -8,7 +8,8 @@ Not scalable at all.
 
 # Phase 2
 
-Using a <https://en.wikipedia.org/wiki/Configuration_management> tool like
+Using a [Configuration
+Management](https://en.wikipedia.org/wiki/Configuration_management) tool like
 Chef, Puppet or Ansible.
 
 Scalability achieved with AMI snapshots and such. Zero downtime with an

Tweaks
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index f65da75..5a0d069 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -16,9 +16,9 @@ Scalability achieved with AMI snapshots and such. Zero downtime with an
 
 # Phase 3
 
-Using PaaS like [Dokku](https://github.com/dokku/dokku), so a developer can
-`git push` to a "[Heroku-ish](https://github.com/gliderlabs/herokuish)"
-endpoint.
+Using <abbr title="Platform as a Service">PaaS</abbr> like
+[Dokku](https://github.com/dokku/dokku), so a developer can `git push` to a
+"[Heroku-ish](https://github.com/gliderlabs/herokuish)" endpoint.
 
 Probably can't scale very well. Zero downtime can be achieved cheaply using
 [some
@@ -30,14 +30,14 @@ simple.
 
 Using Docker & containerizing all the things on something like
 [CoreOS](https://coreos.com/) or RancherOS. Scalability is
-as manual as making sure you can quickly spin up new CoreOS instances with a
-load balancer in front.
+as manual as making sure you can quickly spin up new CoreOS instances & run the
+Docker images with a load balancer in front.
 
 Bonus points if one has figured out how to get a <abbr title="Continuous
 Integration">CI</abbr> to build the image and deploy it.
 
 Bonus points for running two instances for Blue/Green deployments behind the
-load balancer.
+load balancer for zero downtime.
 
 # Phase 5
 
@@ -53,4 +53,5 @@ Scales rather well. Quite complex and heavyweight.
 
 Serverless computing? [AWS Lambda](https://aws.amazon.com/lambda/details/)
 
-Most apps would have to be completely rewritten.
+Most apps would probably have to be completely rewritten and tied to the
+hosting platform in question.

Fleshing out
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 1d7d1a4..f65da75 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -11,7 +11,8 @@ Not scalable at all.
 Using a <https://en.wikipedia.org/wiki/Configuration_management> tool like
 Chef, Puppet or Ansible.
 
-Scalability achieved with AMI snapshots and such.
+Scalability achieved with AMI snapshots and such. Zero downtime with an
+[ELB](https://aws.amazon.com/elasticloadbalancing/).
 
 # Phase 3
 
@@ -19,17 +20,25 @@ Using PaaS like [Dokku](https://github.com/dokku/dokku), so a developer can
 `git push` to a "[Heroku-ish](https://github.com/gliderlabs/herokuish)"
 endpoint.
 
-Probably can't scale very well.
+Probably can't scale very well. Zero downtime can be achieved cheaply using
+[some
+tricks](http://dokku.viewdocs.io/dokku/deployment/zero-downtime-deploys/).
+Probably the best solution for personal projects or for starting out, since
+it's fairly simple.
 
 # Phase 4
 
-Using Docker & containerizing all the things on something like CoreOS or
-RancherOS. Scalability is as manual as making sure you can quickly spin up new
-CoreOS instances with a load balancer in front.
+Using Docker & containerizing all the things on something like
+[CoreOS](https://coreos.com/) or RancherOS. Scalability is
+as manual as making sure you can quickly spin up new CoreOS instances with a
+load balancer in front.
 
 Bonus points if one has figured out how to get a <abbr title="Continuous
 Integration">CI</abbr> to build the image and deploy it.
 
+Bonus points for running two instances for Blue/Green deployments behind the
+load balancer.
+
 # Phase 5
 
 Orchestrating Docker deployments with [Docker
@@ -38,4 +47,10 @@ title="Elastic Container Repository">ECR</abbr> (private Docker image hosting)</
 and AWS <abbr title="Elastic Container Service">ECS</abbr> for managing the
 containers on EC2.
 
-Scales rather well.
+Scales rather well. Quite complex and heavyweight.
+
+# Phase 6
+
+Serverless computing? [AWS Lambda](https://aws.amazon.com/lambda/details/)
+
+Most apps would have to be completely rewritten.

Links
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
index 6303c43..1d7d1a4 100644
--- a/blog/DevOps_evolution.mdwn
+++ b/blog/DevOps_evolution.mdwn
@@ -1,3 +1,5 @@
+Ignoring the complexities surrounding data.
+
 # Phase 1
 
 Manual everything.
@@ -25,13 +27,15 @@ Using Docker & containerizing all the things on something like CoreOS or
 RancherOS. Scalability is as manual as making sure you can quickly spin up new
 CoreOS instances with a load balancer in front.
 
-Bonus points is if one is figured out how to get a CI to build the image and
-deploy it.
+Bonus points if one has figured out how to get a <abbr title="Continuous
+Integration">CI</abbr> to build the image and deploy it.
 
 # Phase 5
 
 Orchestrating Docker deployments with [Docker
-Compose](https://docs.docker.com/compose/), using things like AWS ECR (private
-Docker image hosting) and AWS ECS for managing the containers on EC2.
+Compose](https://docs.docker.com/compose/), using things like <a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_Console_Repositories.html">AWS <abbr
+title="Elastic Container Repository">ECR</abbr> (private Docker image hosting)</a>
+and AWS <abbr title="Elastic Container Service">ECS</abbr> for managing the
+containers on EC2.
 
 Scales rather well.

Some thoughts
diff --git a/blog/DevOps_evolution.mdwn b/blog/DevOps_evolution.mdwn
new file mode 100644
index 0000000..6303c43
--- /dev/null
+++ b/blog/DevOps_evolution.mdwn
@@ -0,0 +1,37 @@
+# Phase 1
+
+Manual everything.
+
+Not scalable at all.
+
+# Phase 2
+
+Using a <https://en.wikipedia.org/wiki/Configuration_management> tool like
+Chef, Puppet or Ansible.
+
+Scalability achieved with AMI snapshots and such.
+
+# Phase 3
+
+Using PaaS like [Dokku](https://github.com/dokku/dokku), so a developer can
+`git push` to a "[Heroku-ish](https://github.com/gliderlabs/herokuish)"
+endpoint.
+
+Probably can't scale very well.
+
+# Phase 4
+
+Using Docker & containerizing all the things on something like CoreOS or
+RancherOS. Scalability is as manual as making sure you can quickly spin up new
+CoreOS instances with a load balancer in front.
+
+Bonus points is if one is figured out how to get a CI to build the image and
+deploy it.
+
+# Phase 5
+
+Orchestrating Docker deployments with [Docker
+Compose](https://docs.docker.com/compose/), using things like AWS ECR (private
+Docker image hosting) and AWS ECS for managing the containers on EC2.
+
+Scales rather well.

rules !
diff --git a/blog/difference_between_docker_and_vm.mdwn b/blog/difference_between_docker_and_vm.mdwn
index cc6b67a..c77091f 100644
--- a/blog/difference_between_docker_and_vm.mdwn
+++ b/blog/difference_between_docker_and_vm.mdwn
@@ -23,7 +23,7 @@ who know the software and dependencies best.
 
 Docker defines a
 [Dockerfile](https://docs.docker.com/engine/reference/builder/) which like
-`debian/build` or Arch's `PKGBUILD` is a very succinct way of describing how a
+`debian/rules` or Arch's `PKGBUILD` is a very succinct way of describing how a
 service is packaged and furthermore deployed in a network and run.
 
 Docker has a layered filesystem to make it easy for developers to isolate

More tweaks
diff --git a/blog/difference_between_docker_and_vm.mdwn b/blog/difference_between_docker_and_vm.mdwn
index 1f8b374..cc6b67a 100644
--- a/blog/difference_between_docker_and_vm.mdwn
+++ b/blog/difference_between_docker_and_vm.mdwn
@@ -7,13 +7,15 @@ LXC](https://en.wikipedia.org/wiki/LXC).
 Ideally Docker is running on a "bare metal" machine since it does not need to
 be virtualised. It runs a lot faster, but admittedly you probably won't see the
 performance boon if you are running on an already virtualised VPS like EC2 or
-MacOS.
+MacOS. A lightweight Linux OS like [CoreOS](https://coreos.com/) or
+[RancherOS](http://rancher.com/rancher-os/) makes more sense for running Docker
+containers.
 
 Docker runs whole services in containers which are isolated and can be
 controlled much like a Virtual machine, but without any complex hardware
 abstractions.
 
-But Docker is much more. For example the <https://hub.docker.com/> is a public
+But Docker is much more; for example, <https://hub.docker.com/> is a public
 repository where you can almost pick a pre-**packaged** service off the shelf.
 Instead of running Nginx packaged by your distribution for example, you can run
 the [official nginx Docker image from Nginx](https://hub.docker.com/_/nginx/),
@@ -25,11 +27,11 @@ Docker defines a
 service is packaged and furthermore deployed in a network and run.
 
 Docker has a layered filesystem to make it easy for developers to isolate
-changes and to iterate quickly. You can easily keep uptodate or roll back and
-forth atomically using image tags.
+changes and to iterate quickly. You can painlessly keep current or roll back
+and forth atomically using image tags.
 
 Docker has a raft of handy tools in their popular ecosystem to help network
 containers and manage filesystem mounts between the container and host.
 [Compose](https://docs.docker.com/compose/) is an advanced tool with a
-definition language for managing a whole formation. A complex network of
-services can be launched and maintained using it.
+definition language for managing a whole formation of Cloud services. A complex
+network of services can be launched and maintained using it.

Corrections
diff --git a/blog/difference_between_docker_and_vm.mdwn b/blog/difference_between_docker_and_vm.mdwn
index 0b0feb3..1f8b374 100644
--- a/blog/difference_between_docker_and_vm.mdwn
+++ b/blog/difference_between_docker_and_vm.mdwn
@@ -1,16 +1,17 @@
-[[!meta title="Whats the difference between a Docker container and a virtual machine?" ]]
+[[!meta title="What's the difference between a Docker container and a Virtual machine?" ]]
 
 Docker is much **faster** and more productive to work with than a Virtual
 machine. It's basically a front end on [**Linux containers** aka
 LXC](https://en.wikipedia.org/wiki/LXC).
 
 Ideally Docker is running on a "bare metal" machine since it does not need to
-virtualised. It runs a lot faster, but admittedly you probably won't see the
+be virtualised. It runs a lot faster, but admittedly you probably won't see the
 performance boon if you are running on an already virtualised VPS like EC2 or
 MacOS.
 
 Docker runs whole services in containers which are isolated and can be
-controlled much like a Virtual machine.
+controlled much like a Virtual machine, but without any complex hardware
+abstractions.
 
 But Docker is much more. For example the <https://hub.docker.com/> is a public
 repository where you can almost pick a pre-**packaged** service off the shelf.
@@ -28,7 +29,7 @@ changes and to iterate quickly. You can easily keep uptodate or roll back and
 forth atomically using image tags.
 
 Docker has a raft of handy tools in their popular ecosystem to help network
-containers and manage mounts between the container and host.
+containers and manage filesystem mounts between the container and host.
 [Compose](https://docs.docker.com/compose/) is an advanced tool with a
 definition language for managing a whole formation. A complex network of
 services can be launched and maintained using it.

filename fix
diff --git a/blog/difference_between_docker_and_vm.mdwn b/blog/difference_between_docker_and_vm.mdwn
new file mode 100644
index 0000000..0b0feb3
--- /dev/null
+++ b/blog/difference_between_docker_and_vm.mdwn
@@ -0,0 +1,34 @@
+[[!meta title="Whats the difference between a Docker container and a virtual machine?" ]]
+
+Docker is much **faster** and more productive to work with than a Virtual
+machine. It's basically a front end on [**Linux containers** aka
+LXC](https://en.wikipedia.org/wiki/LXC).
+
+Ideally Docker is running on a "bare metal" machine since it does not need to
+virtualised. It runs a lot faster, but admittedly you probably won't see the
+performance boon if you are running on a already virtualised VPS like EC2 or
+MacOS.
+
+Docker runs whole services in containers which are isolated and can be
+controlled much like a Virtual machine.
+
+But Docker is much more. For example the <https://hub.docker.com/> is a public
+repository where you can almost pick a pre-**packaged** service off the shelf.
+Instead of running Nginx packaged by your distribution for example, you can run
+the [official nginx Docker image from Nginx](https://hub.docker.com/_/nginx/),
+who know the software and dependencies best.
+
+Docker defines a
+[Dockerfile](https://docs.docker.com/engine/reference/builder/) which like
+`debian/build` or Arch's `PKGBUILD` is a very succinct way of describing how a
+service is packaged and furthermore deployed in a network and run.
+
+Docker has a layered filesystem to make it easy for developers to isolate
+changes and to iterate quickly. You can easily keep uptodate or roll back and
+forth atomically using image tags.
+
+Docker has a raft of handy tools in their popular ecosystem to help network
+containers and manage mounts between the container and host.
+[Compose](https://docs.docker.com/compose/) is a advanced tool with a
+definition language for managing a whole formation. A complex network of
+services can be launched and maintained using it.
diff --git a/blog/whats_the_difference_docker_container_and_a_virtual_machine? b/blog/whats_the_difference_docker_container_and_a_virtual_machine?
deleted file mode 100644
index 0b0feb3..0000000
--- a/blog/whats_the_difference_docker_container_and_a_virtual_machine?
+++ /dev/null
@@ -1,34 +0,0 @@
-[[!meta title="Whats the difference between a Docker container and a virtual machine?" ]]
-
-Docker is much **faster** and more productive to work with than a Virtual
-machine. It's basically a front end on [**Linux containers** aka
-LXC](https://en.wikipedia.org/wiki/LXC).
-
-Ideally Docker is running on a "bare metal" machine since it does not need to
-virtualised. It runs a lot faster, but admittedly you probably won't see the
-performance boon if you are running on a already virtualised VPS like EC2 or
-MacOS.
-
-Docker runs whole services in containers which are isolated and can be
-controlled much like a Virtual machine.
-
-But Docker is much more. For example the <https://hub.docker.com/> is a public
-repository where you can almost pick a pre-**packaged** service off the shelf.
-Instead of running Nginx packaged by your distribution for example, you can run
-the [official nginx Docker image from Nginx](https://hub.docker.com/_/nginx/),
-who know the software and dependencies best.
-
-Docker defines a
-[Dockerfile](https://docs.docker.com/engine/reference/builder/) which like
-`debian/build` or Arch's `PKGBUILD` is a very succinct way of describing how a
-service is packaged and furthermore deployed in a network and run.
-
-Docker has a layered filesystem to make it easy for developers to isolate
-changes and to iterate quickly. You can easily keep uptodate or roll back and
-forth atomically using image tags.
-
-Docker has a raft of handy tools in their popular ecosystem to help network
-containers and manage mounts between the container and host.
-[Compose](https://docs.docker.com/compose/) is a advanced tool with a
-definition language for managing a whole formation. A complex network of
-services can be launched and maintained using it.

Dump some reasoning
diff --git a/blog/whats_the_difference_docker_container_and_a_virtual_machine? b/blog/whats_the_difference_docker_container_and_a_virtual_machine?
new file mode 100644
index 0000000..0b0feb3
--- /dev/null
+++ b/blog/whats_the_difference_docker_container_and_a_virtual_machine?
@@ -0,0 +1,34 @@
+[[!meta title="Whats the difference between a Docker container and a virtual machine?" ]]
+
+Docker is much **faster** and more productive to work with than a Virtual
+machine. It's basically a front end on [**Linux containers** aka
+LXC](https://en.wikipedia.org/wiki/LXC).
+
+Ideally Docker is running on a "bare metal" machine since it does not need to
+virtualised. It runs a lot faster, but admittedly you probably won't see the
+performance boon if you are running on a already virtualised VPS like EC2 or
+MacOS.
+
+Docker runs whole services in containers which are isolated and can be
+controlled much like a Virtual machine.
+
+But Docker is much more. For example the <https://hub.docker.com/> is a public
+repository where you can almost pick a pre-**packaged** service off the shelf.
+Instead of running Nginx packaged by your distribution for example, you can run
+the [official nginx Docker image from Nginx](https://hub.docker.com/_/nginx/),
+who know the software and dependencies best.
+
+Docker defines a
+[Dockerfile](https://docs.docker.com/engine/reference/builder/) which like
+`debian/build` or Arch's `PKGBUILD` is a very succinct way of describing how a
+service is packaged and furthermore deployed in a network and run.
+
+Docker has a layered filesystem to make it easy for developers to isolate
+changes and to iterate quickly. You can easily keep uptodate or roll back and
+forth atomically using image tags.
+
+Docker has a raft of handy tools in their popular ecosystem to help network
+containers and manage mounts between the container and host.
+[Compose](https://docs.docker.com/compose/) is a advanced tool with a
+definition language for managing a whole formation. A complex network of
+services can be launched and maintained using it.

link to @edent's blog
diff --git a/blog/PDF-A_versus_HTML.mdwn b/blog/PDF-A_versus_HTML.mdwn
index 4fa9582..8e595d6 100644
--- a/blog/PDF-A_versus_HTML.mdwn
+++ b/blog/PDF-A_versus_HTML.mdwn
@@ -1,5 +1,7 @@
 [[!meta title="PDF/A versus HTML" ]]
 
+2016 update: <https://shkspr.mobi/blog/2016/07/pdfs-are-the-cheques-of-the-21st-century/> has a good summary of why PDF is a bad format
+
 Latest: Leonard Rosenthol has since posted a [followup on the discussion](http://acroeng.adobe.com/leonardr/PDFA_vs_HTML.html).
 
 [PDFSAGE](http://twitter.com/pdfsage/status/2313664922) wondered what the cons

Image
diff --git a/e/17011.mdwn b/e/17011.mdwn
index 973e59c..b255746 100644
--- a/e/17011.mdwn
+++ b/e/17011.mdwn
@@ -1,5 +1,7 @@
 [[!meta title="Interesting journalctl logs"]]
 
+<img src=http://s.natalian.org/2016-07-05/1467701001_2548x1380.png alt="systemctl --failed">
+
 See failed services:
 
 	systemctl --failed

Tip
diff --git a/e/17011.mdwn b/e/17011.mdwn
new file mode 100644
index 0000000..973e59c
--- /dev/null
+++ b/e/17011.mdwn
@@ -0,0 +1,11 @@
+[[!meta title="Interesting journalctl logs"]]
+
+See failed services:
+
+	systemctl --failed
+
+See erroring line:
+
+	journalctl -b -p err
+
+Many thanks to [grawity](https://github.com/grawity)

New blog
diff --git a/blog/Do_not_btrfs_device_remove.mdwn b/blog/Do_not_btrfs_device_remove.mdwn
new file mode 100644
index 0000000..ebaca58
--- /dev/null
+++ b/blog/Do_not_btrfs_device_remove.mdwn
@@ -0,0 +1,31 @@
+Do NOT `btrfs device delete`, which is synonymous with `btrfs device remove`,
+when removing a disk from a [btrfs](https://en.wikipedia.org/wiki/Btrfs) RAID
+array.
+
+# Why?
+
+**Deleting the device from the array removes the data from it (as 
+mentioned above), and wipes all BTRFS specific signatures as well.**
+
+The device {remove, delete} command is for shrinking arrays, removing failing
+disks, or for re-purposing individual drives.
+
+Here's the [start of my
+thread](http://www.spinics.net/lists/linux-btrfs/msg55937.html) on
+[linux-btrfs](http://vger.kernel.org/vger-lists.html#linux-btrfs), which has
+all the details.
+
+# So how do I split a RAID1 array?
+
+Well, the experts agree that this shouldn't be done. If you want to take out a
+mirrored copy, instead `btrfs send` a snapshot to create another copy
+somewhere.
+
+However if you _really_ want to do this, you would have to physically remove
+the disk once unmounted. It should then mount in a degraded state next time.
+Add a new drive and **be certain to run a balance with
+-dconvert=raid1 -mconvert=raid1 to clean up anything that got allocated before
+the new disk was added.**
+
+Ideally, it shouldn't be needed at all; it's only necessary due to a
+deficiency in the high-level allocator in BTRFS.
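The recovery steps above can be sketched as a dry run that only prints the commands, since running them blindly against the wrong device is destructive (`/dev/sda`, `/dev/sdb` and `/mnt` are placeholders):

```shell
# Dry-run sketch of splitting a two-disk btrfs RAID1: mount degraded,
# add a replacement disk, then rebalance back to the raid1 profiles.
# Device names and mountpoint are placeholders - only printed here.
for cmd in \
  "mount -o degraded /dev/sda /mnt" \
  "btrfs device add /dev/sdb /mnt" \
  "btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt"
do
  echo "would run: $cmd"
done
```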

Less is more.
diff --git a/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn b/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
index e468d06..6efea0b 100644
--- a/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
+++ b/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
@@ -7,36 +7,7 @@ Unfortunately most FAQs seem to fail to do this. To save you time doing this by
 hand, I have written [toc](https://github.com/kaihendry/toc), a tool that
 creates a table of contents from your headers with `id` anchors.
 
-FAQ template, `faq.src.html`:
-
-	<h2 class="no-toc no-num">Frequently Asked Questions</h2>
-	<div data-fill-with="table-of-contents" ><!-- toc --></div>
-
-	<h3>How do I create a FAQ?</h3>
-	<p>Using HTML</p>
-
-	<h3>What's the best kiosk software out there?</h3>
-	<p><a href="http://webconverger.com">Webconverger</a></p>
-
-Run `toc faq.src.html > faq.html` and boom:
-
-	<h2 class="no-toc no-num">Frequently Asked Questions</h2>
-	<div id=tocwrapper>
-	<!--begin-toc-->
-	<ol class=toc>
-	 <li><a href=#how-do-i-create-a-faq?><span class=secno>1 </span>How do I create a FAQ?</a></li>
-	 <li><a href="#what's-the-best-kiosk-software-out-there?"><span class=secno>2 </span>What's the best kiosk software out there?</a></li></ol>
-	<!--end-toc--></div>
-
-	<h3 id=how-do-i-create-a-faq?><span class=secno>1 </span>How do I create a FAQ?</h3>
-	<p>Using HTML</p>
-
-	<h3 id="what's-the-best-kiosk-software-out-there?"><span class=secno>2 </span>What's the best kiosk software out there?</h3>
-	<p><a href=http://webconverger.com>Webconverger</a></p>
-
-Job done. Here is a better [example FAQ](https://config.webconverger.com/faq/)
-
-Here is a Makefile I use a lot on my Websites:
+Here is a Makefile I use on my Websites:
 
 	INFILES = $(shell find . -name "*.src.html")
 	OUTFILES = $(INFILES:.src.html=.html)
@@ -53,3 +24,5 @@ Here is a Makefile I use a lot on my Websites:
 		rm -f $(OUTFILES)
 
 	PHONY: all clean
+
+`m4` is used for inserting footers and such.
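To illustrate the anchor-id convention (this is a sketch, not toc's actual
code), the heading-to-id step amounts to stripping the tags, lowercasing and
hyphenating:

```shell
# turn an <h3> heading into an id anchor: strip tags, lowercase,
# spaces to hyphens (illustrative sketch only)
echo '<h3>How do I create a FAQ?</h3>' |
  sed -E 's|</?h3>||g' |
  tr 'A-Z ' 'a-z-'
```

which prints `how-do-i-create-a-faq?`, in the style of toc's generated anchors.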

Update
diff --git a/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn b/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
index c3e2e72..e468d06 100644
--- a/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
+++ b/blog/How_to_create_a_FAQ_that_does_not_suck.mdwn
@@ -4,12 +4,13 @@ What does a FAQ need?
 2. A way to hyperlink the question
 
 Unfortunately most FAQs seem to fail to do this. To save you time doing this by
-hand, I highly recommend [anolis](https://aur.archlinux.org/packages/python2-anolis/)
+hand, I have written [toc](https://github.com/kaihendry/toc), a tool that
+creates a table of contents from your headers with `id` anchors.
 
 FAQ template, `faq.src.html`:
 
 	<h2 class="no-toc no-num">Frequently Asked Questions</h2>
-	<div id="tocwrapper"><!-- toc --></div>
+	<div data-fill-with="table-of-contents" ><!-- toc --></div>
 
 	<h3>How do I create a FAQ?</h3>
 	<p>Using HTML</p>
@@ -17,7 +18,7 @@ FAQ template, `faq.src.html`:
 	<h3>What's the best kiosk software out there?</h3>
 	<p><a href="http://webconverger.com">Webconverger</a></p>
 
-Run `anolis faq.src.html faq.html` and boom:
+Run `toc faq.src.html > faq.html` and boom:
 
 	<h2 class="no-toc no-num">Frequently Asked Questions</h2>
 	<div id=tocwrapper>
@@ -45,7 +46,7 @@ Here is a Makefile I use a lot on my Websites:
 
 	%.html: %.src.html
 		m4 -PEIinc $< > $(TEMP)
-		anolis $(TEMP) $@
+		toc $(TEMP) > $@
 		rm -f $(TEMP)
 
 	clean:

Quick update
diff --git a/blog/Webkit_on_Rpi2.mdwn b/blog/Webkit_on_Rpi2.mdwn
index ce14573..1d391d8 100644
--- a/blog/Webkit_on_Rpi2.mdwn
+++ b/blog/Webkit_on_Rpi2.mdwn
@@ -1,3 +1,5 @@
+UPDATE: Latest OpenGL work status seems to be here, <https://dri.freedesktop.org/wiki/VC4/> which is **Required** for Webkit2.
+
 Since creating a [Webconverger Rpi2 port](https://webconverger.org/rpi2/) based
 upon [Archlinux-Arm](http://archlinuxarm.org/) (Alarm) I've run into the issue whereby
 Webkit2 stops working after an upgrade.

Note
diff --git a/blog/Wiping_a_Xiaomi_Mi_note.mdwn b/blog/Wiping_a_Xiaomi_Mi_note.mdwn
new file mode 100644
index 0000000..b735912
--- /dev/null
+++ b/blog/Wiping_a_Xiaomi_Mi_note.mdwn
@@ -0,0 +1,55 @@
+<img src=http://s.natalian.org/2016-05-19/Data-Wipe-Failed.jpg alt="Data Wipe Failed">
+
+My friend came to me with his **Xiaomi Mi Note** that he was planning to sell,
+but was unable to reset through the menus. I managed to reset it with `fastboot
+-w` whilst holding (I think) 'Volume down' and the 'Power' button from a cold
+boot.
+
+Using `fastboot` from the `android-sdk-platform-tools` package
+(`/opt/android-sdk/platform-tools/fastboot`) on Archlinux. Here's the log:
+
+	~$ fastboot -w
+	< waiting for any device >
+	Creating filesystem with parameters:
+		Size: 60121133056
+		Block size: 4096
+		Blocks per group: 32768
+		Inodes per group: 8192
+		Inode size: 256
+		Journal blocks: 32768
+		Label:
+		Blocks: 14678011
+		Block groups: 448
+		Reserved block group size: 1024
+	Created filesystem with 11/3670016 inodes and 276420/14678011 blocks
+	target reported max download size of 1610612736 bytes
+	Creating filesystem with parameters:
+		Size: 402653184
+		Block size: 4096
+		Blocks per group: 32768
+		Inodes per group: 8192
+		Inode size: 256
+		Journal blocks: 1536
+		Label:
+		Blocks: 98304
+		Block groups: 3
+		Reserved block group size: 23
+	Created filesystem with 11/24576 inodes and 3131/98304 blocks
+	erasing 'userdata'...
+	OKAY [ 24.552s]
+	sending 'userdata' (141083 KB)...
+	OKAY [  4.419s]
+	writing 'userdata'...
+	OKAY [  2.585s]
+	erasing 'cache'...
+	OKAY [  0.040s]
+	sending 'cache' (8336 KB)...
+	OKAY [  0.263s]
+	writing 'cache'...
+	OKAY [  0.139s]
+	finished. total time: 31.999s
+	~$ fastboot continue
+	resuming boot...
+	OKAY [  0.000s]
+	finished. total time: 0.000s
+	~$

Tweak
diff --git a/blog/Caddy_in_Docker.mdwn b/blog/Caddy_in_Docker.mdwn
index 339f49f..3a00f97 100644
--- a/blog/Caddy_in_Docker.mdwn
+++ b/blog/Caddy_in_Docker.mdwn
@@ -56,13 +56,13 @@ Most "modern" linux systems do this now. This is what my
 	WantedBy=multi-user.target
 
 You might be wondering, **why** are there these _pre_ steps to {kill,rm,pull}
-Caddy?  It looks really ugly but what we are doing here on restart is checking
-for updates and using that update. No more patching! A `restart` is almost all
-we need to ensure we are current.
+Caddy?  It looks really ugly but what we are doing here is checking for updates
+and using that update. No more patching! A `restart` is almost all we need to
+ensure we are current.
 
 Notice I bind my configuration at `/home/hendry/caddy/Caddyfile` to the
 location the container expects that to be in. These little mappings should be
-documented in the container's README or
+documented in the container's README or in the
 [Dockerfile](https://github.com/abiosoft/caddy-docker/blob/master/Dockerfile).
 
 My configuration `/home/hendry/caddy/Caddyfile` looks like:
@@ -100,7 +100,7 @@ please let me know.
 
 So typically once I edit `~/caddy/Caddyfile` with a new site, I then edit `/etc/hosts` and then `sudo systemctl restart caddy`.
 
-To have it startup on boot. `sudo systemctl enable caddy`.
+Tip: To have it startup on boot: `sudo systemctl enable caddy`.
 
 # Conclusion
 

Tweaks
diff --git a/blog/Caddy_in_Docker.mdwn b/blog/Caddy_in_Docker.mdwn
index 5b28354..339f49f 100644
--- a/blog/Caddy_in_Docker.mdwn
+++ b/blog/Caddy_in_Docker.mdwn
@@ -14,7 +14,7 @@ Assuming you have Docker setup, to run a "packaged" Caddy in a container, it can
 
 	docker run --rm -p 2015:2015 abiosoft/caddy:php
 
-* `--rm` means get rid of the image once we exit
+* `--rm` means clean up after exiting (we are just kicking the tyres here)
 * `-p 2015:2015` means expose port 2015
 * [abiosoft/caddy:php](https://github.com/abiosoft/caddy-docker/) is the name of the "packaged" maintained Docker featuring Caddy
 
@@ -23,8 +23,9 @@ Assuming you have Docker setup, to run a "packaged" Caddy in a container, it can
 It's cool because we don't have to fiddle with a golang environment or which
 binaries we should download and how to setup PHP. It just works!
 
-I don't need to actually know too much about the internals. I supply a config
-file, some directory and port mappings ... and BOOM ... I'm away!
+I don't need to actually know too much about the boring internal stuff. I
+supply a config file, some directory and port mappings ... and BOOM ... I'm
+away!
 
 Finally & most importantly to me, someone else maintains it &
 [abiosoft](https://twitter.com/abiosoft) is doing a good job!
@@ -54,6 +55,11 @@ Most "modern" linux systems do this now. This is what my
 	[Install]
 	WantedBy=multi-user.target
 
+You might be wondering, **why** are there these _pre_ steps to {kill,rm,pull}
+Caddy?  It looks really ugly but what we are doing here on restart is checking
+for updates and using that update. No more patching! A `restart` is almost all
+we need to ensure we are current.
+
 Notice I bind my configuration at `/home/hendry/caddy/Caddyfile` to the
 location the container expects that to be in. These little mappings should be
 documented in the container's README or
@@ -78,7 +84,10 @@ My configuration `/home/hendry/caddy/Caddyfile` looks like:
 		errors stdout
 	}
 
-Notice I use stdout, so I can use `journalctl -u caddy -f` to view & most
+Since I run Caddy locally on my laptop for development, I don't need Caddy's
+awesome **automatic HTTPS** feature.
+
+Notice I use `stdout`, so I can use `journalctl -u caddy -f` to view & most
 importantly maintain the logs.
 
 # Hostnames
@@ -91,4 +100,10 @@ please let me know.
 
 So typically once I edit `~/caddy/Caddyfile` with a new site, I then edit `/etc/hosts` and then `sudo systemctl restart caddy`.
 
-To have it startup on boot. `sudo systemctl enable caddy`
+To have it startup on boot. `sudo systemctl enable caddy`.
+
+# Conclusion
+
+This is my current local laptop configuration, but the production version on
+[CoreOS on Digital Ocean](https://m.do.co/c/37b3b1850b32) is almost the same
+except the hostnames and the user being **core** instead of _hendry_.

Caddy
diff --git a/blog/Caddy_in_Docker.mdwn b/blog/Caddy_in_Docker.mdwn
index 34f980f..5b28354 100644
--- a/blog/Caddy_in_Docker.mdwn
+++ b/blog/Caddy_in_Docker.mdwn
@@ -1,8 +1,8 @@
 <img src=http://s.natalian.org/2016-05-09/local-caddy.png alt="Caddy running in Docker on Archlinux">
 
-Instead of running Caddy via `go get`, I've opted on my local machine to use
-Docker. Why? Because that's how my servers run, so I thought it would be
-sensible to reproduce the environment locally on my laptop.
+Instead of running [Caddy](https://caddyserver.com/) via `go get`, I've opted
+on my local machine to use Docker. Why? Because that's how my servers run, so I
+thought it would be sensible to reproduce the environment locally on my laptop.
 
 # What is Docker?
 

Tweaks
diff --git a/blog/Caddy_in_Docker.mdwn b/blog/Caddy_in_Docker.mdwn
index 959b165..34f980f 100644
--- a/blog/Caddy_in_Docker.mdwn
+++ b/blog/Caddy_in_Docker.mdwn
@@ -31,8 +31,10 @@ Finally & most importantly to me, someone else maintains it &
 
 # So how do I integrate this to my system?
 
-I recommend using a system systemd with Docker. Most "modern" linux systems do
-this now. This is what my `/etc/systemd/system/caddy.service` looks like:
+Unsurprisingly **linux containers** best run on Linux & I recommend using a
+Linux system with [systemd](https://en.wikipedia.org/wiki/Systemd) with Docker.
+Most "modern" linux systems do this now. This is what my
+`/etc/systemd/system/caddy.service` looks like:
 
 	[Unit]
 	Description=Caddy
@@ -76,7 +78,8 @@ My configuration `/home/hendry/caddy/Caddyfile` looks like:
 		errors stdout
 	}
 
-Notice I use stdout, so I can use `journalctl -u caddy -f` to look at the logs.
+Notice I use stdout, so I can use `journalctl -u caddy -f` to view & most
+importantly maintain the logs.
 
 # Hostnames
 
@@ -84,6 +87,8 @@ I manually add the names of my hosts into `/etc/hosts`. **172.17.0.1** in my
 case is the IP of the Docker instance. If anyone knows how to make this easier,
 please let me know.
 
-172.17.0.1 natalian config
+	172.17.0.1 natalian config
 
 So typically once I edit `~/caddy/Caddyfile` with a new site, I then edit `/etc/hosts` and then `sudo systemctl restart caddy`.
+
+To have it startup on boot. `sudo systemctl enable caddy`

Quick blog
diff --git a/blog/Caddy_in_Docker.mdwn b/blog/Caddy_in_Docker.mdwn
new file mode 100644
index 0000000..959b165
--- /dev/null
+++ b/blog/Caddy_in_Docker.mdwn
@@ -0,0 +1,89 @@
+<img src=http://s.natalian.org/2016-05-09/local-caddy.png alt="Caddy running in Docker on Archlinux">
+
+Instead of running Caddy via `go get`, I've opted on my local machine to use
+Docker. Why? Because that's how my servers run, so I thought it would be
+sensible to reproduce the environment locally on my laptop.
+
+# What is Docker?
+
+I think of Docker as simply a Linux container front end, a bit like a package
+manager such as `npm`. However, instead of assembling dependencies yourself,
+you usually get the entire self-contained product in a Docker container.
+
+Assuming you have Docker setup, to run a "packaged" Caddy in a container, it can be as simple as running:
+
+	docker run --rm -p 2015:2015 abiosoft/caddy:php
+
+* `--rm` means get rid of the image once we exit
+* `-p 2015:2015` means expose port 2015
+* [abiosoft/caddy:php](https://github.com/abiosoft/caddy-docker/) is the name of the "packaged" maintained Docker featuring Caddy
+
+# So why is this cool?
+
+It's cool because we don't have to fiddle with a golang environment or which
+binaries we should download and how to setup PHP. It just works!
+
+I don't need to actually know too much about the internals. I supply a config
+file, some directory and port mappings ... and BOOM ... I'm away!
+
+Finally & most importantly to me, someone else maintains it &
+[abiosoft](https://twitter.com/abiosoft) is doing a good job!
+
+# So how do I integrate this to my system?
+
+I recommend using a systemd system with Docker. Most "modern" linux systems do
+this now. This is what my `/etc/systemd/system/caddy.service` looks like:
+
+	[Unit]
+	Description=Caddy
+	After=docker.service
+	Requires=docker.service
+
+	[Service]
+	TimeoutStartSec=0
+	User=hendry
+	ExecStartPre=-/usr/bin/docker kill caddy
+	ExecStartPre=-/usr/bin/docker rm caddy
+	ExecStartPre=-/usr/bin/docker pull abiosoft/caddy:php
+	ExecStart=/usr/bin/docker run --name caddy -v /home/hendry/caddy/Caddyfile:/etc/Caddyfile -v /srv/www:/srv/ -v /home/hendry/caddy/caddy-certs:/root/.caddy -p 80:80 -p 443:443 abiosoft/caddy:php
+	RestartSec=5
+	Restart=always
+
+	[Install]
+	WantedBy=multi-user.target
+
+Notice I bind my configuration at `/home/hendry/caddy/Caddyfile` to the
+location the container expects that to be in. These little mappings should be
+documented in the container's README or
+[Dockerfile](https://github.com/abiosoft/caddy-docker/blob/master/Dockerfile).
+
+My configuration `/home/hendry/caddy/Caddyfile` looks like:
+
+	natalian:80 {
+		tls off
+		root natalian
+		header / Content-Encoding "gzip"
+		log stdout
+		errors stdout
+	}
+
+	config:80 {
+		tls off
+		root config
+		startup php-fpm
+		fastcgi / 127.0.0.1:9000 php
+		log stdout
+		errors stdout
+	}
+
+Notice I use stdout, so I can use `journalctl -u caddy -f` to look at the logs.
+
+# Hostnames
+
+I manually add the names of my hosts into `/etc/hosts`. **172.17.0.1** in my
+case is the IP of the Docker instance. If anyone knows how to make this easier,
+please let me know.
+
+172.17.0.1 natalian config
+
+So typically once I edit `~/caddy/Caddyfile` with a new site, I then edit `/etc/hosts` and then `sudo systemctl restart caddy`.

Update
diff --git a/blog/Samba_sharing_with_undelete.mdwn b/blog/Samba_sharing_with_undelete.mdwn
index 7d56d95..a78a9fe 100644
--- a/blog/Samba_sharing_with_undelete.mdwn
+++ b/blog/Samba_sharing_with_undelete.mdwn
@@ -48,3 +48,7 @@ stage & I wouldn't recommend doing this.
 
 * [btrfs fi show /dev/sdc1](http://s.natalian.org/2016-05-02/show.txt)
 * [btrfs filesystem usage /mnt/raid1/](http://s.natalian.org/2016-05-02/usage.txt)
+
+Btw I [asked the samba
+list](https://lists.samba.org/archive/samba/2016-May/199652.html) for some
+clarity regarding the PAM/passwd integration.

Note about RAID1
diff --git a/blog/Samba_sharing_with_undelete.mdwn b/blog/Samba_sharing_with_undelete.mdwn
index bf2b9f7..7d56d95 100644
--- a/blog/Samba_sharing_with_undelete.mdwn
+++ b/blog/Samba_sharing_with_undelete.mdwn
@@ -40,3 +40,11 @@ and no add script defined` otherwise.
 # client /etc/fstab
 
 	//nuc.local/raid1 /mnt/raid1 cifs username=hendry,uid=1000,gid=100,noauto,nofail,user
+
+# RAID1 configuration with USB drives
+
+Btw I am actually RAID1 two external USB drives. It's **EXPERIMENTAL** at this
+stage & I wouldn't recommend doing this.
+
+* [btrfs fi show /dev/sdc1](http://s.natalian.org/2016-05-02/show.txt)
+* [btrfs filesystem usage /mnt/raid1/](http://s.natalian.org/2016-05-02/usage.txt)

Samba sharing
diff --git a/blog/Samba_sharing_with_undelete.mdwn b/blog/Samba_sharing_with_undelete.mdwn
new file mode 100644
index 0000000..bf2b9f7
--- /dev/null
+++ b/blog/Samba_sharing_with_undelete.mdwn
@@ -0,0 +1,42 @@
+<iframe width="560" height="315" src="https://www.youtube.com/embed/GwhtoeMx1I8" frameborder="0" allowfullscreen></iframe>
+
+# /etc/samba/smb.conf
+
+	[global]
+	   workgroup = 888
+	   server string = nuc
+	   security = user
+	   syslog only = no
+	   dns proxy = no
+	   log level = 3
+	   force directory mode = 0777
+	   force create mode = 0777
+	   guest ok = no
+	   force user = root
+	   force group = root
+	   read only = no
+	   valid users = rufie hendry
+	   vfs object = recycle
+	   recycle:repository = DELETED
+	   recycle:keeptree = yes
+	   recycle:directory_mode=777
+	   veto files = /._*/.DS_Store/
+	   delete veto files = yes
+
+	[raid1]
+	   comment = Raided store
+	   path = /mnt/raid1
+
+	[ext2tb]
+	   comment = Nonresilent
+	   path = /mnt/2tb
+
+On [Arch's Samba](https://wiki.archlinux.org/index.php/Samba) I found that I
+had to set the password on an existing user like so: `smbpasswd -a rufie`. I
+have no idea where the password is actually stored. I found this would not
+work unless the existing user was in PAM. You will get a `Could not find user
+BLAH and no add script defined` otherwise.
+
+# client /etc/fstab
+
+	//nuc.local/raid1 /mnt/raid1 cifs username=hendry,uid=1000,gid=100,noauto,nofail,user

Docker Tip
diff --git a/e/14011.mdwn b/e/14011.mdwn
new file mode 100644
index 0000000..e7a1704
--- /dev/null
+++ b/e/14011.mdwn
@@ -0,0 +1,5 @@
+[[!meta title="Debugging running Docker image"]]
+
+Where `$name` is the name of the running container:
+
+	docker exec -it $name /bin/sh
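If you don't know the name, `docker ps` can list the running containers first —
a sketch that assumes a Docker daemon is up, and 'caddy' is just an example
container name:

```shell
# list the names of running containers, then open a shell inside one
docker ps --format '{{.Names}}'
docker exec -it caddy /bin/sh
```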

Another tip
diff --git a/e/14010.mdwn b/e/14010.mdwn
new file mode 100644
index 0000000..6ca7e8a
--- /dev/null
+++ b/e/14010.mdwn
@@ -0,0 +1,17 @@
+[[!meta title="AWS S3 versioning nuances"]]
+
+<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Oh crap, I just noticed that <a href="https://twitter.com/awscloud">@awscloud</a> versioning duplicates the file even if the file is exactly the same. DOH! <a href="https://t.co/xLDGOJryO0">https://t.co/xLDGOJryO0</a></p>&mdash; Kai Hendry (@kaihendry) <a href="https://twitter.com/kaihendry/status/723358363191402502">April 22, 2016</a></blockquote>
+<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
+
+**HOWEVER**, if you use [sync](http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) then the file will not be replaced, unless it has changed!
+
+	aws --profile jewel s3 sync bar s3://s3test.jptdev.com/
+
+Unfortunately there is no way to be notified if a file has been replaced AFAICT!
+
+<img src=http://s.natalian.org/2016-04-22/sns-needed-for-replace.png>
+
+There is a [S3
+event](http://docs.aws.amazon.com/AmazonS3/latest/UG/SettingBucketNotifications.html)
+<abbr title="A Reduced Redundancy Storage (RRS) object lost
+event">RRSObjectLost</abbr> which is quite interesting.
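One crude way to check whether content actually changed before a `cp` is to
compare the local MD5 against the object's ETag (for a plain single-part upload
the ETag is the content's MD5) — a local-only sketch, not an AWS feature:

```shell
# MD5 of the content; for a single-part S3 upload this equals the
# object's ETag, so a match means no new version is needed
printf 'Hello World\n' | md5sum | cut -d' ' -f1
```

which prints `e59ff97941044f85df5297e1c302d260` — the same ETag S3 reports for
a 12-byte `Hello World` object.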

new tip
diff --git a/e/14009.mdwn b/e/14009.mdwn
new file mode 100644
index 0000000..4f7009b
--- /dev/null
+++ b/e/14009.mdwn
@@ -0,0 +1,7 @@
+[[!meta title="Publish AWS S3 events to email"]]
+
+	Unable to validate the following destination configurations : Permissions on the destination topic do not allow S3 to publish notifications from this bucket
+
+FIX: Edit topic policy!
+
+<img src=http://s.natalian.org/2016-04-22/1461297764_2558x1404.png alt="Edit AWS SNS topic policy">

Anonymouse
diff --git a/e/14007.mdwn b/e/14007.mdwn
index 7de7690..e6a9251 100644
--- a/e/14007.mdwn
+++ b/e/14007.mdwn
@@ -3,17 +3,17 @@
 Yes, [versioning on my bucket is enabled!](http://docs.aws.amazon.com/AmazonS3/latest/UG/enable-bucket-versioning.html)
 
 
-	/tmp/test$ aws --profile jewel s3 ls s3://s3test.jptdev.com/test.txt
+	/tmp/test$ aws --profile example s3 ls s3://s3test.jptdev.com/test.txt
 	2016-04-22 11:29:29         50 test.txt
 
 Make changes to `test.txt`.
 
-	/tmp/test$ aws --profile jewel s3 cp test.txt s3://s3test.jptdev.com/
+	/tmp/test$ aws --profile example s3 cp test.txt s3://s3test.jptdev.com/
 	upload: ./test.txt to s3://s3test.jptdev.com/test.txt
 
 Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefix** not **key**.
 
-	aws --profile jewel s3api list-object-versions --bucket s3test.jptdev.com --prefix test.txt
+	aws --profile example s3api list-object-versions --bucket s3test.jptdev.com --prefix test.txt
 	{
 		"Versions": [
 			{
@@ -23,7 +23,7 @@ Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefi
 				"StorageClass": "STANDARD",
 				"Key": "test.txt",
 				"Owner": {
-					"DisplayName": "sean",
+					"DisplayName": "hendry",
 					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
 				},
 				"IsLatest": true,
@@ -36,7 +36,7 @@ Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefi
 				"StorageClass": "STANDARD",
 				"Key": "test.txt",
 				"Owner": {
-					"DisplayName": "sean",
+					"DisplayName": "hendry",
 					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
 				},
 				"IsLatest": false,
@@ -49,7 +49,7 @@ Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefi
 				"StorageClass": "STANDARD",
 				"Key": "test.txt",
 				"Owner": {
-					"DisplayName": "sean",
+					"DisplayName": "hendry",
 					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
 				},
 				"IsLatest": false,
@@ -62,7 +62,7 @@ Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefi
 				"StorageClass": "STANDARD",
 				"Key": "test.txt",
 				"Owner": {
-					"DisplayName": "sean",
+					"DisplayName": "hendry",
 					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
 				},
 				"IsLatest": false,
@@ -71,7 +71,7 @@ Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefi
 		]
 	}
 
-	/tmp/test$ aws --profile jewel s3api get-object --bucket s3test.jptdev.com --key test.txt --version-id NhFpF7hak4qe7zg0bbGIQmk6QYqyC2q9 foo.txt
+	/tmp/test$ aws --profile example s3api get-object --bucket s3test.jptdev.com --key test.txt --version-id NhFpF7hak4qe7zg0bbGIQmk6QYqyC2q9 foo.txt
 	{
 		"AcceptRanges": "bytes",
 		"ContentType": "text/plain",

New tip from t4
diff --git a/e/14007.mdwn b/e/14007.mdwn
new file mode 100644
index 0000000..7de7690
--- /dev/null
+++ b/e/14007.mdwn
@@ -0,0 +1,86 @@
+[[!meta title="AWS S3 versioning"]]
+
+Yes, [versioning on my bucket is enabled!](http://docs.aws.amazon.com/AmazonS3/latest/UG/enable-bucket-versioning.html)
+
+
+	/tmp/test$ aws --profile jewel s3 ls s3://s3test.jptdev.com/test.txt
+	2016-04-22 11:29:29         50 test.txt
+
+Make changes to `test.txt`.
+
+	/tmp/test$ aws --profile jewel s3 cp test.txt s3://s3test.jptdev.com/
+	upload: ./test.txt to s3://s3test.jptdev.com/test.txt
+
+Notice the `aws-cli/1.10.20`'s `list-object-versions` seems to work with **prefix** not **key**.
+
+	aws --profile jewel s3api list-object-versions --bucket s3test.jptdev.com --prefix test.txt
+	{
+		"Versions": [
+			{
+				"LastModified": "2016-04-22T03:29:29.000Z",
+				"VersionId": "lzDLKsb_Dgq5nMOe8pAotZszO6.cs5eq",
+				"ETag": "\"2fe9901004a6ec6ec047f53c7e185e6d\"",
+				"StorageClass": "STANDARD",
+				"Key": "test.txt",
+				"Owner": {
+					"DisplayName": "sean",
+					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
+				},
+				"IsLatest": true,
+				"Size": 50
+			},
+			{
+				"LastModified": "2016-04-22T03:17:14.000Z",
+				"VersionId": "DVgKQiDzCpK_vZ8am6KxP7DDPg4oNAys",
+				"ETag": "\"2fe9901004a6ec6ec047f53c7e185e6d\"",
+				"StorageClass": "STANDARD",
+				"Key": "test.txt",
+				"Owner": {
+					"DisplayName": "sean",
+					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
+				},
+				"IsLatest": false,
+				"Size": 50
+			},
+			{
+				"LastModified": "2016-04-22T03:14:58.000Z",
+				"VersionId": "3FF6_QlVwFCbXTTqYJqvIMHOh9uD_Zb0",
+				"ETag": "\"c9d3df23822ab54e39af74471eaa9f68\"",
+				"StorageClass": "STANDARD",
+				"Key": "test.txt",
+				"Owner": {
+					"DisplayName": "sean",
+					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
+				},
+				"IsLatest": false,
+				"Size": 34
+			},
+			{
+				"LastModified": "2016-04-22T03:14:49.000Z",
+				"VersionId": "NhFpF7hak4qe7zg0bbGIQmk6QYqyC2q9",
+				"ETag": "\"e59ff97941044f85df5297e1c302d260\"",
+				"StorageClass": "STANDARD",
+				"Key": "test.txt",
+				"Owner": {
+					"DisplayName": "sean",
+					"ID": "d68f9f3b34a478c25469ceb76ca6772fe9d3b02488a908f0562e93084c4294f7"
+				},
+				"IsLatest": false,
+				"Size": 12
+			}
+		]
+	}
+
+	/tmp/test$ aws --profile jewel s3api get-object --bucket s3test.jptdev.com --key test.txt --version-id NhFpF7hak4qe7zg0bbGIQmk6QYqyC2q9 foo.txt
+	{
+		"AcceptRanges": "bytes",
+		"ContentType": "text/plain",
+		"LastModified": "Fri, 22 Apr 2016 03:14:49 GMT",
+		"ContentLength": 12,
+		"VersionId": "NhFpF7hak4qe7zg0bbGIQmk6QYqyC2q9",
+		"ETag": "\"e59ff97941044f85df5297e1c302d260\"",
+		"Metadata": {}
+	}
+	/tmp/test$ cat foo.txt
+	Hello World
+

Alpine tip
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
index 3675e39..a32f5c0 100644
--- a/blog/First_impressions_of_Django_1.9.mdwn
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -3,18 +3,21 @@
 
 Using python2.
 
-* [tutorial is good](https://docs.djangoproject.com/en/1.9/intro/tutorial01/)
+* [tutorial is good](https://docs.djangoproject.com/en/1.9/intro/tutorial01/) - I think
 * This app within a project, but an app can be in many projects seems a bit weird. Prefer just "app" (KISS).
 * Where's the `.gitignore` file and the `Dockerfile`?
-* sqlite seems fine, so at what point is it worth taking the leap to postgres?
+* sqlite seems fine, so at what point is it worth taking the leap to postgres? Really not looking forward to devops headaches here
 * manage.py is good. `manage.py runserver` auto-reloading is great. So is the debug. So is the shell (didn't expect that!).
 * admin interface is great, except for that weird `---------` default action. I hope I can fine tune it though.
 * change history in the admin interface is a very welcome surprise. But sadly if the change didn't happen in the admin interface it's not tracked. :(
 * the [double underscore thing](https://docs.djangoproject.com/en/1.9/intro/tutorial02/) was a bit jarring.
 * following the tutorial, i.e. copying and pasting into vim was a gigantic PITA. there must be a better way. Remembering to `:set paste`... arghghghgh
-* the [vim django tips on the wiki](https://code.djangoproject.com/wiki/UsingVimWithDjango) are good. ditched [python-mode](https://twitter.com/kaihendry/status/705696319789096960) for [syntastic](https://github.com/scrooloose/syntastic) and I'm happier
+* the [vim django tips on the wiki](https://code.djangoproject.com/wiki/UsingVimWithDjango) are good. ditched [python-mode](https://twitter.com/kaihendry/status/705696319789096960) for [syntastic](https://github.com/scrooloose/syntastic) and I'm happier (for some minutes)
+* Noticed that the Django community can't seem to decide on a project layout
 
 How do you figure out that `reverse` comes from `from django.core.urlresolvers import reverse`? Need something like goimports!
 
 * the [power of the admin page](https://docs.djangoproject.com/en/1.9/intro/tutorial07/) makes me think I should be using that as a basis of my customer facing forms.
 * the section on [reusable apps](https://docs.djangoproject.com/en/1.9/intro/reusable-apps/) was a bit beyond the remit of a tutorial, I feel. Are all these [Github search results](https://github.com/search?q=django&ref=simplesearch&type=Repositories&utf8=%E2%9C%93) the sort of apps I can drop into my project?
+
+* struggling to understand how `models.py` maps to JSON schema
diff --git a/e/17010.mdwn b/e/17010.mdwn
new file mode 100644
index 0000000..792f09e
--- /dev/null
+++ b/e/17010.mdwn
@@ -0,0 +1,8 @@
+[[!meta title="how do I list the files in an Alpine package?"]]
+
+	/srv # apk info -L php-json
+	php-json-5.6.17-r0 contains:
+	etc/php/conf.d/json.ini
+	usr/lib/php/modules/json.so
+
+

Rescue notes
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
index 3ee0e1f..3675e39 100644
--- a/blog/First_impressions_of_Django_1.9.mdwn
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -4,7 +4,7 @@
 Using python2.
 
 * [tutorial is good](https://docs.djangoproject.com/en/1.9/intro/tutorial01/)
-* This app within a project, but an app can be in many projects seems a bit wierd. Prefer just app (KISS).
+* This app within a project, but an app can be in many projects seems a bit weird. Prefer just "app" (KISS).
 * Where's the `.gitignore` file and the `Dockerfile`?
 * sqlite seems fine, so at what point is it worth taking the leap to postgres?
 * manage.py is good. `manage.py runserver` auto-reloading is great. So is the debug. So is the shell (didn't expect that!).
diff --git a/blog/Openwrt_rescue.mdwn b/blog/Openwrt_rescue.mdwn
new file mode 100644
index 0000000..ee3164c
--- /dev/null
+++ b/blog/Openwrt_rescue.mdwn
@@ -0,0 +1,11 @@
+When resetting an OpenWRT router into failsafe mode, it usually comes up as
+192.168.1.1 with no DHCP server running. So you need to manually give your
+machine an address on that subnet (e.g. 192.168.1.2) and telnet to 192.168.1.1.
+
+This is non-trivial in Linux. For example my wired network interface's name is
+`enp0s20u2`:
+
+	ip addr add 192.168.1.2/24 broadcast 192.168.1.255 dev enp0s20u2
+	route add default gw 192.168.1.1 dev enp0s20u2
+
+You need the route to make sure you direct traffic to `192.168.1.1`.
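For what it's worth, the legacy `route` call above can also be expressed with
`iproute2` alone — a sketch assuming the same machine-specific interface name,
and it needs root, so it is untested here:

```shell
# same two steps using only the ip tool (interface name is machine-specific)
ip addr add 192.168.1.2/24 broadcast 192.168.1.255 dev enp0s20u2
ip route add default via 192.168.1.1 dev enp0s20u2
```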

Apps
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
index dbdcba2..3ee0e1f 100644
--- a/blog/First_impressions_of_Django_1.9.mdwn
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -17,4 +17,4 @@ Using python2.
 How do you figure out that `reverse` comes from `from django.core.urlresolvers import reverse`? Need something like goimports!
 
 * the [power of the admin page](https://docs.djangoproject.com/en/1.9/intro/tutorial07/) makes me think I should be using that as a basis of my customer facing forms.
-* the section on [reusable apps](https://docs.djangoproject.com/en/1.9/intro/reusable-apps/) was bit beyond remit of a tutorial I feel
+* the section on [reusable apps](https://docs.djangoproject.com/en/1.9/intro/reusable-apps/) was a bit beyond the remit of a tutorial, I feel. Are all these [Github search results](https://github.com/search?q=django&ref=simplesearch&type=Repositories&utf8=%E2%9C%93) the sort of apps I can drop into my project?

More
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
index d16282b..dbdcba2 100644
--- a/blog/First_impressions_of_Django_1.9.mdwn
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -13,3 +13,8 @@ Using python2.
 * the [double underscore thing](https://docs.djangoproject.com/en/1.9/intro/tutorial02/) was a bit jarring.
 * following the tutorial, i.e. copying and pasting into vim was a gigantic PITA. there must be a better way. Remembering to `:set paste`... arghghghgh
 * the [vim django tips on the wiki](https://code.djangoproject.com/wiki/UsingVimWithDjango) are good. ditched [python-mode](https://twitter.com/kaihendry/status/705696319789096960) for [syntastic](https://github.com/scrooloose/syntastic) and I'm happier
+
+How do you figure out that `reverse` comes from `from django.core.urlresolvers import reverse`? Need something like goimports!
+
+* the [power of the admin page](https://docs.djangoproject.com/en/1.9/intro/tutorial07/) makes me think I should be using that as a basis of my customer facing forms.
+* the section on [reusable apps](https://docs.djangoproject.com/en/1.9/intro/reusable-apps/) was a bit beyond the remit of a tutorial, I feel

Update link
diff --git a/blog/Centralisation_censorship_side_effect.mdwn b/blog/Centralisation_censorship_side_effect.mdwn
index c9d537b..aa1eab3 100644
--- a/blog/Centralisation_censorship_side_effect.mdwn
+++ b/blog/Centralisation_censorship_side_effect.mdwn
@@ -1,8 +1,9 @@
 [[!meta title="Side effect of centralisation WRT censorship"]]
 
 UPDATE 2016-01-28: Unsurprisingly Sarawak Report moved to
-[Medium](https://medium.com/) and the Malaysian government proceeded to block
-medium.com hilariously. If only Sarawak Report moved to http://s3.amazonaws.com ... lol
+[Medium](https://medium.com/medium-legal/the-post-stays-up-d222e34cb7e7#.6p7uuhxmo)
+and the Malaysian government proceeded to block medium.com hilariously. If only
+Sarawak Report moved to http://s3.amazonaws.com ... lol
 
 I'm no fan of massively centralised services such as Google's Youtube,
 Facebook, Twitter and Reddit, since I feel there is too much power in one

more so
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
index 87b954c..d16282b 100644
--- a/blog/First_impressions_of_Django_1.9.mdwn
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -3,13 +3,13 @@
 
 Using python2.
 
-
 * [tutorial is good](https://docs.djangoproject.com/en/1.9/intro/tutorial01/)
-* This app within a project, but an app can be in many projects seems a bit weird.
+* This app within a project, but an app can be in many projects seems a bit weird. Prefer just app (KISS).
 * Where's the `.gitignore` file and the `Dockerfile`?
-* sqlite seems fine, so at what point is it worth taking the leap to postgres? wish it was NoSQL tbh
-* manage.py is good. `manage.py runserver` auto-reloading is great. So is the debug. So is the shell.
+* sqlite seems fine, so at what point is it worth taking the leap to postgres?
+* manage.py is good. `manage.py runserver` auto-reloading is great. So is the debug. So is the shell (didn't expect that!).
 * admin interface is great, except for that weird `---------` default action. I hope I can fine-tune it though.
-* change history in the admin interface is a very welcome surprise. But sadly if the change didn't happen in the admin interface it's not reflected. :(
-* the double underscore thing was a bit jarring. 
-* following the tutorial, i.e. copying and pasting into vim was a gigantic PITA. there must be a better way.
+* change history in the admin interface is a very welcome surprise. But sadly if the change didn't happen in the admin interface it's not tracked. :(
+* the [double underscore thing](https://docs.djangoproject.com/en/1.9/intro/tutorial02/) was a bit jarring.
+* following the tutorial, i.e. copying and pasting into vim was a gigantic PITA. there must be a better way. Remembering to `:set paste`... arghghghgh
+* the [vim django tips on the wiki](https://code.djangoproject.com/wiki/UsingVimWithDjango) are good. ditched [python-mode](https://twitter.com/kaihendry/status/705696319789096960) for [syntastic](https://github.com/scrooloose/syntastic) and I'm happier

ongoing
diff --git a/blog/First_impressions_of_Django_1.9.mdwn b/blog/First_impressions_of_Django_1.9.mdwn
new file mode 100644
index 0000000..87b954c
--- /dev/null
+++ b/blog/First_impressions_of_Django_1.9.mdwn
@@ -0,0 +1,15 @@
+	~$ python2 -c "import django; print(django.get_version())"
+	1.9.2
+
+Using python2.
+
+
+* [tutorial is good](https://docs.djangoproject.com/en/1.9/intro/tutorial01/)
+* This app within a project, but an app can be in many projects seems a bit weird.
+* Where's the `.gitignore` file and the `Dockerfile`?
+* sqlite seems fine, so at what point is it worth taking the leap to postgres? wish it was NoSQL tbh
+* manage.py is good. `manage.py runserver` auto-reloading is great. So is the debug. So is the shell.
+* admin interface is great, except for that weird `---------` default action. I hope I can fine-tune it though.
+* change history in the admin interface is a very welcome surprise. But sadly if the change didn't happen in the admin interface it's not reflected. :(
+* the double underscore thing was a bit jarring. 
+* following the tutorial, i.e. copying and pasting into vim was a gigantic PITA. there must be a better way.

Link video
diff --git a/blog/Developing_Docker_container_workflow.mdwn b/blog/Developing_Docker_container_workflow.mdwn
index e0f0b0c..2bd33c5 100644
--- a/blog/Developing_Docker_container_workflow.mdwn
+++ b/blog/Developing_Docker_container_workflow.mdwn
@@ -1,3 +1,5 @@
+Companion [video on the issues I have with Docker](https://www.youtube.com/watch?v=tQZfCOpXJmE)
+
 In an attempt to improve my previous [[Docker_container_update_workflow]], I
 want to write my notes on how I develop Dockerfiles et al.
 

New blog
diff --git a/blog/Developing_Docker_container_workflow.mdwn b/blog/Developing_Docker_container_workflow.mdwn
new file mode 100644
index 0000000..e0f0b0c
--- /dev/null
+++ b/blog/Developing_Docker_container_workflow.mdwn
@@ -0,0 +1,32 @@
+In an attempt to improve my previous [[Docker_container_update_workflow]], I
+want to write my notes on how I develop Dockerfiles et al.
+
+First I find the source repository of the Docker image I'm interested in.
+
+For example https://hub.docker.com/r/abiosoft/caddy/ &rarr; https://github.com/abiosoft/caddy-docker
+
+I then fork that into my own repo and clone it, for example into ~/tmp/caddy-docker.
+
+Typically I would make changes to the Dockerfile. Then:
+
+1. Run a build `docker build -t WIP .`
+2. Run it `docker run -d --name caddy -p 2015:2015 -t WIP`
+3. See it running with `docker ps`
+4. Run a shell like so: `docker exec -i -t caddy sh`
+5. Stop it like so `docker kill caddy`
+6. To start it again `docker start caddy`
+7. Remove it like so `docker rm caddy`
+
+
+Hopefully I then have a Dockerfile I like.
+
+I then `docker login` and push my image to the Docker hub and perhaps test on
+a production CoreOS machine.
+
+When satisfied I commit the changes to my git repo and create a PR.
+
+# Unsolved issues of my workflow
+
+I would like to `git diff` changes in the running Docker filesystem.
+
+I would like to examine files in the Docker images without shelling in.
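On those two unsolved issues: plain `docker diff` lists files added/changed/deleted in a running container's filesystem (not quite `git diff`, but close), and `docker cp NAME:path -` streams a single file out as a tar without shelling in. A sketch, where `inspect_container` is my own helper name and the `caddy` container plus the `/etc/Caddyfile` path are assumptions taken from the workflow above:

```shell
inspect_container() {
    name=$1
    docker diff "$name"                 # A/C/D list of files changed since the image was built
    docker cp "$name":/etc/Caddyfile -  # stream one file to stdout (tar format), no shell needed
}
```

e.g. `inspect_container caddy`.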

More obs
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index 0ac0464..33ec2ae 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -7,7 +7,7 @@ can send as that address and only to that address.
 For your sanity keep track of the AWS region you are in! I am using [Oregon aka
 us-west-2](http://s.natalian.org/2015-12-10/1449713100_1918x1058.png).
 
-I tested both **ssmtp** and **mstmp** !
+I tested both **ssmtp** and [msmtp](http://msmtp.sourceforge.net/) !
 
 # /etc/ssmtp/ssmtp.conf
 
@@ -64,3 +64,5 @@ I recommend msmtp since the revaliases is just:
 Though `ssmtp` has nicer logging. If you use ssmtp, change the binary:
 
 	cat | /usr/sbin/ssmtp -i -- $v
+
+Also [ssmtp weighs in at 5k SLOC whilst msmtp is 18k](http://s.natalian.org/2016-02-11/1455175273_1912x1036.png)

https
diff --git a/templates/page.tmpl b/templates/page.tmpl
index 88f1033..c7d78df 100644
--- a/templates/page.tmpl
+++ b/templates/page.tmpl
@@ -226,7 +226,7 @@ Last edited <TMPL_VAR MTIME>
 
 <fieldset style='margin: 2em; font-family "Helvetica Neue Thin, sans-serif";' class=feedback>
 <legend>Feedback</legend>
-<form onsubmit="return feedback(this);" style="margin: 1em;" action="http://feedback.dabase.com/feedback/feedback.php" method="post">
+<form onsubmit="return feedback(this);" style="margin: 1em;" action="https://feedback.dabase.com/feedback/feedback.php" method="post">
 
 <p class=field>
 <label for=from class=fieldname>

Upload videos
diff --git a/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn b/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
index b202449..9b8125d 100644
--- a/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
+++ b/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
@@ -15,3 +15,7 @@ The workaround is to:
 And then reboot the iPhone to fix it.
 
 Here is a complete script I use to archive my photos: <http://s.natalian.org/2016-01-28/datemove.sh>
+
+
+Oh btw, this is how I currently use ffprobe to prefix my video backups with YYYY-MM-DD by **creation time**:
+<http://s.natalian.org/2016-01-29/moviemove.sh>

fix link
diff --git a/e/13043.mdwn b/e/13043.mdwn
index 8582e3b..7f0a4f9 100644
--- a/e/13043.mdwn
+++ b/e/13043.mdwn
@@ -1,6 +1,6 @@
 [[!meta title="Finding the rotation of a iPhone video"]]
 
-Using `ffprobe` which should be included in a [[ffmpeg]](https://twitter.com/FFmpeg) distribution:
+Using `ffprobe`, which should be included in an [ffmpeg](https://twitter.com/FFmpeg) distribution:
 
 
 	for m in *.MOV
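The hunk above cuts the loop off; here is a sketch of the whole thing. `parse_rotation` and `rotation_report` are my own helper names, and the `TAG:rotate=` line format is what `ffprobe -show_streams` prints for rotated iPhone clips (verify against your own files):

```shell
# Extract the rotation (degrees) from ffprobe -show_streams output.
parse_rotation() {
    grep -m1 '^TAG:rotate=' | cut -d= -f2
}

# Report the rotation of every .MOV in the current directory.
rotation_report() {
    for m in *.MOV; do
        [ -e "$m" ] || continue   # no .MOV files at all
        r=$(ffprobe -v quiet -show_streams "$m" | parse_rotation)
        echo "$m rotation: ${r:-0}"
    done
}
```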

Archive script
diff --git a/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn b/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
index 948e6f9..b202449 100644
--- a/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
+++ b/blog/Archiving_iPhone_images_with_Archlinux_and_ifuse.mdwn
@@ -13,3 +13,5 @@ The workaround is to:
 	rm /mnt/iphone/PhotoData/Photos.sqlite
 
 And then reboot the iPhone to fix it.
+
+Here is a complete script I use to archive my photos: <http://s.natalian.org/2016-01-28/datemove.sh>
diff --git a/blog/Centralisation_censorship_side_effect.mdwn b/blog/Centralisation_censorship_side_effect.mdwn
index 4020a65..c9d537b 100644
--- a/blog/Centralisation_censorship_side_effect.mdwn
+++ b/blog/Centralisation_censorship_side_effect.mdwn
@@ -2,7 +2,7 @@
 
 UPDATE 2016-01-28: Unsurprisingly Sarawak Report moved to
 [Medium](https://medium.com/) and the Malaysian government proceeded to block
-medium.com hilariously. If only Sarawak report moved to http://s3.amazonaws.com ... lol
+medium.com hilariously. If only Sarawak Report moved to http://s3.amazonaws.com ... lol
 
 I'm no fan of massively centralised services such as Google's Youtube,
 Facebook, Twitter and Reddit, since I feel there is too much power in one

S3
diff --git a/blog/Centralisation_censorship_side_effect.mdwn b/blog/Centralisation_censorship_side_effect.mdwn
index 5728fba..4020a65 100644
--- a/blog/Centralisation_censorship_side_effect.mdwn
+++ b/blog/Centralisation_censorship_side_effect.mdwn
@@ -1,7 +1,8 @@
 [[!meta title="Side effect of centralisation WRT censorship"]]
 
-UPDATE 2016-01-28: Unsurprisingly Sarawak Report moved to Medium and the
-Malaysian government proceeded to block medium.com hilariously.
+UPDATE 2016-01-28: Unsurprisingly Sarawak Report moved to
+[Medium](https://medium.com/) and the Malaysian government proceeded to block
+medium.com hilariously. If only Sarawak report moved to http://s3.amazonaws.com ... lol
 
 I'm no fan of massively centralised services such as Google's Youtube,
 Facebook, Twitter and Reddit, since I feel there is too much power in one

Medium
diff --git a/blog/Centralisation_censorship_side_effect.mdwn b/blog/Centralisation_censorship_side_effect.mdwn
index b23f0e2..5728fba 100644
--- a/blog/Centralisation_censorship_side_effect.mdwn
+++ b/blog/Centralisation_censorship_side_effect.mdwn
@@ -1,5 +1,8 @@
 [[!meta title="Side effect of centralisation WRT censorship"]]
 
+UPDATE 2016-01-28: Unsurprisingly Sarawak Report moved to Medium and the
+Malaysian government proceeded to block medium.com hilariously.
+
 I'm no fan of massively centralised services such as Google's Youtube,
 Facebook, Twitter and Reddit, since I feel there is too much power in one
 place.

clarify
diff --git a/e/14002.mdwn b/e/14002.mdwn
index b7b625c..83477a8 100644
--- a/e/14002.mdwn
+++ b/e/14002.mdwn
@@ -1,11 +1,20 @@
 [[!meta title="Steps to make a S3 hosted Git repository"]]
 
 	$ export BUCKET=YOUR_BUCKET_NAME
+
+	# Only run these two commands once per bucket/repo
 	$ s3cmd mb s3://$BUCKET # aws s3 mb s3://$BUCKET
+	$ aws s3api put-bucket-acl --bucket $BUCKET --acl public-read
+
+	# Run these commands every time you wish to push to the S3 repo
 	$ git update-server-info
 	$ test -d .git && cd .git
 	$ s3cmd -P sync . s3://$BUCKET # aws s3 sync --acl public-read . s3://$BUCKET
-	$ aws s3api put-bucket-acl --bucket $BUCKET --acl public-read
+
+	# Run this when you want to clone from the S3 repo to another folder
 	$ git clone https://$BUCKET.s3.amazonaws.com
 
+	# Run this after cloning to pull the latest changes pushed to S3
+	$ git pull
+
 To be safer, you should probably use [git bundle](http://stackoverflow.com/a/34593391/4534).
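A sketch of that safer `git bundle` route, since a bundle is a single atomic file rather than a loose `.git` tree that can half-sync. `bundle_to_s3` is my own wrapper name, and the `s3cmd put -P` invocation is an assumption consistent with the commands above:

```shell
# Pack the whole repo into one file and upload it publicly.
bundle_to_s3() {
    BUCKET=$1
    git bundle create repo.bundle --all
    s3cmd put -P repo.bundle "s3://$BUCKET/repo.bundle"
}
# later, restore with:
#   git clone https://YOUR_BUCKET_NAME.s3.amazonaws.com/repo.bundle myrepo
```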

Finally
diff --git a/e/14002.mdwn b/e/14002.mdwn
index 3945f55..b7b625c 100644
--- a/e/14002.mdwn
+++ b/e/14002.mdwn
@@ -1,13 +1,11 @@
 [[!meta title="Steps to make a S3 hosted Git repository"]]
 
-Lets create a bucket from the command line using
-[s3cmd](https://www.archlinux.org/packages/community/any/s3cmd/):
-
-	$ s3cmd mb s3://example # aws s3 mb s3://example
-	$ git fsck
+	$ export BUCKET=YOUR_BUCKET_NAME
+	$ s3cmd mb s3://$BUCKET # aws s3 mb s3://$BUCKET
 	$ git update-server-info
 	$ test -d .git && cd .git
-	$ s3cmd -P sync . s3://example # aws s3 sync --acl public-read . s3://example
-	$ git clone https://example.s3.amazonaws.com
+	$ s3cmd -P sync . s3://$BUCKET # aws s3 sync --acl public-read . s3://$BUCKET
+	$ aws s3api put-bucket-acl --bucket $BUCKET --acl public-read
+	$ git clone https://$BUCKET.s3.amazonaws.com
 
-To be safer, you should probably use [git bundle](http://stackoverflow.com/a/34593391/4534)
+To be safer, you should probably use [git bundle](http://stackoverflow.com/a/34593391/4534).

better instructions
diff --git a/e/14002.mdwn b/e/14002.mdwn
index 8ae9443..3945f55 100644
--- a/e/14002.mdwn
+++ b/e/14002.mdwn
@@ -3,7 +3,11 @@
 Lets create a bucket from the command line using
 [s3cmd](https://www.archlinux.org/packages/community/any/s3cmd/):
 
-	$ s3cmd mb s3://example
+	$ s3cmd mb s3://example # aws s3 mb s3://example
+	$ git fsck
 	$ git update-server-info
-	$ s3cmd -P sync .git/ s3://example
-	$ s3cmd ws-create s3://example
+	$ test -d .git && cd .git
+	$ s3cmd -P sync . s3://example # aws s3 sync --acl public-read . s3://example
+	$ git clone https://example.s3.amazonaws.com
+
+To be safer, you should probably use [git bundle](http://stackoverflow.com/a/34593391/4534)

Footnote
diff --git a/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn b/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn
index 6482bbf..75819c0 100644
--- a/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn
+++ b/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn
@@ -18,3 +18,5 @@ I'd say just over 10m away through a door.
 The last test was almost 60MB/sec where my computer was right next to the AP.
 
 <img src=http://s.natalian.org/2016-01-03/x1c3-archerc7.webp alt="X1C3 iperf3 testing next to Archer C7 with OpenWRT 15.05">
+
+Update: [Opensource Wireless AC doesn't seem to exist sadly. :(](https://twitter.com/kaihendry/status/683665977758748672)

AC warning to the world
diff --git a/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn b/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn
new file mode 100644
index 0000000..6482bbf
--- /dev/null
+++ b/blog/Wireless_AC_only_works_when_stupidly_close_to_the_AP.mdwn
@@ -0,0 +1,20 @@
+The **Qualcomm Atheros QCA9880 802.11ac** in my [Archer C7
+v2](https://wiki.openwrt.org/toh/tp-link/tl-wdr7500) running OpenWRT 15.05 only
+seems to perform when really close to the <abbr title="Access Point">AP</abbr>.
+Btw I recommend [OpenWRT](https://openwrt.org/) over the stock TP-LINK
+firmware!! I found it faster for a start.
+
+* [overview](http://s.natalian.org/2016-01-03/overview.png)
+* [edit](http://s.natalian.org/2016-01-03/wireless-edit.png)
+
+<img src=http://s.natalian.org/2016-01-03/nuc.local_2016-01-03_wlp4s0.svg alt="ac range tests">
+
+My first test was from behind my study door, about 5m from the <abbr
+title="Access Point">AP</abbr>... ~15MB/sec
+
+Second test from my study table where it was barely usable, aka out of range.
+I'd say just over 10m away through a door.
+
+The last test was almost 60MB/sec where my computer was right next to the AP.
+
+<img src=http://s.natalian.org/2016-01-03/x1c3-archerc7.webp alt="X1C3 iperf3 testing next to Archer C7 with OpenWRT 15.05">

Less
diff --git a/templates/page.tmpl b/templates/page.tmpl
index 11aac31..88f1033 100644
--- a/templates/page.tmpl
+++ b/templates/page.tmpl
@@ -211,8 +211,6 @@ Last edited <TMPL_VAR MTIME>
 <!-- Created <TMPL_VAR CTIME> -->
 </div>
 
-<p id=feedback>Noticed an error? Something that could be better? Please tell me!</p>
-
 <fieldset>
 <legend>Advertisement</legend>
 <p>If you like this, you might like the opensource software <a href=https://webconverger.com/>Web kiosk software</a> I develop. It's very useful in public and business environments for ease of deployment and privacy.</p>
@@ -226,11 +224,6 @@ Last edited <TMPL_VAR MTIME>
 
 <TMPL_IF HTML5></article><TMPL_ELSE></div></TMPL_IF>
 
-<script>
-document.getElementById("feedback").innerHTML += " feedback" + new Date().getFullYear() + "@dabase.com";
-</script>
-
-
 <fieldset style='margin: 2em; font-family "Helvetica Neue Thin, sans-serif";' class=feedback>
 <legend>Feedback</legend>
 <form onsubmit="return feedback(this);" style="margin: 1em;" action="http://feedback.dabase.com/feedback/feedback.php" method="post">

One more tip
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index e64d4b5..0ac0464 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -4,6 +4,9 @@ Set up <https://aws.amazon.com/ses/> with a [verified
 email](http://s.natalian.org/2015-11-10/1447139043_1150x1058.png) so that it
 can send as that address and only to that address.
 
+For your sanity keep track of the AWS region you are in! I am using [Oregon aka
+us-west-2](http://s.natalian.org/2015-12-10/1449713100_1918x1058.png).
+
 I tested both **ssmtp** and **mstmp** !
 
 # /etc/ssmtp/ssmtp.conf

Better info
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index e5ae49b..e64d4b5 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -1,3 +1,5 @@
+[[!meta title="Mail from a VPS using AWS SES from sandbox mode" ]]
+
 Set up <https://aws.amazon.com/ses/> with a [verified
 email](http://s.natalian.org/2015-11-10/1447139043_1150x1058.png) so that it
 can send as that address and only to that address.
@@ -31,7 +33,7 @@ I tested both **ssmtp** and **mstmp** !
 	password secret
 	from foo@example.com
 
-# Workaround write your own /usr/sbin/sendmail
+# Making /usr/sbin/sendmail to send to your verified address only
 
 When you get:
 
@@ -41,10 +43,14 @@ When you get:
 	send-mail: server message: 554 Transaction failed: Missing final '@domain'
 	send-mail: could not send mail (account default from /etc/msmtprc)
 
+Either request a "Limit Increase: SES Sending Limits" so that your account
+moves out of the sandbox and you no longer need to verify recipient
+addresses... or follow the steps below to make `sendmail` send mail to your
+only verified email:
 
 If you use msmtp:
 
-	v=foo@example.com # aws ses verified email
+	v=foo@example.com # your AWS SES verified email
 	cat | /usr/bin/msmtp -i -- $v
 
 I recommend msmtp since the revaliases is just:
@@ -52,6 +58,6 @@ I recommend msmtp since the revaliases is just:
 	# cat /etc/msmtp-aliases
 	default: foo@example.com
 
-If you use ssmtp:
+Though `ssmtp` has nicer logging. If you use ssmtp, change the binary:
 
 	cat | /usr/sbin/ssmtp -i -- $v

Fix tip
diff --git a/e/01178.mdwn b/e/01178.mdwn
index 1ee13a9..2a461d0 100644
--- a/e/01178.mdwn
+++ b/e/01178.mdwn
@@ -1,10 +1,24 @@
 [[!meta title="Ensure www-data is always able to write"]]
 
-Ensure your fs is mounted with `acl`.
+Annoyingly, different distros run the web server under different users, i.e. not always `www-data`. I test with:
+
+	<?php echo exec('whoami'); ?>
+
+For example, I think Fedora uses the `apache` user even if you run nginx. Debian is
+better since it generally uses `www-data` across the board.
+
+Ensure your fs is mounted with `acl` in order for the `setfacl` commands to work!
 
 	 mount | grep acl
 	/dev/root on / type ext3 (rw,noatime,errors=remount-ro,acl,barrier=0,data=writeback)
 
-And to ensure www-data always has free rein:
+or check with `sudo tune2fs -l $YOUR_DISK | grep Default`
+
+Now to ensure www-data always has free rein:
+
+	# chmod g+s . # setgid bit: new files inherit the directory's group
+	# setfacl -R -m default:group:www-data:rwx /srv/www
+
+See <https://github.com/kaihendry/myresponder/blob/master/setup.sh> for a fully worked example.
 
-	setfacl -R -m default:group:www-data:rwx /srv/www
+And check the result with `getfacl`!
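The two commands above can be wrapped into one helper; `www_data_writable` is my own name for it, and `RUN=echo` gives a dry run so you can eyeball the commands first:

```shell
www_data_writable() {
    dir=$1
    ${RUN:-sudo} chmod g+s "$dir"   # setgid: new files inherit the directory's group
    ${RUN:-sudo} setfacl -R -m default:group:www-data:rwx "$dir"
}
```

Then verify with `getfacl /srv/www`.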

markdown
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index 77d2868..e5ae49b 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -2,7 +2,7 @@ Set up <https://aws.amazon.com/ses/> with a [verified
 email](http://s.natalian.org/2015-11-10/1447139043_1150x1058.png) so that it
 can send as that address and only to that address.
 
-I tested both **ssmtp* and **mstmp** !
+I tested both **ssmtp** and **mstmp** !
 
 # /etc/ssmtp/ssmtp.conf
 

Revised
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index 37f6195..77d2868 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -1,21 +1,8 @@
-So I thought it was prude to establish reliable `mail()` functionality from my
-various servers I administer.
-
-First step was choose a name of VPS. I chose "rojak".
-
-I setup a domain alias at Fastmail.fm so I can receive the emails to that address.
-
-I then set up <https://aws.amazon.com/ses/> with a [verified
+Set up <https://aws.amazon.com/ses/> with a [verified
 email](http://s.natalian.org/2015-11-10/1447139043_1150x1058.png) so that it
-could send as that address.
+can send as that address and only to that address.
 
-Caveat: You are only able to send mail to that email address to. So for example
-if you had foo@example.com verified, you can only send to and from that
-address!
-
-So now I needed a dumb mailer. My new VPS is Fedora 23 and the usual candidates
-of [msmtp](http://msmtp.sourceforge.net/) & Debian's
-[ssmtp](https://wiki.debian.org/sSMTP).
+I tested both **ssmtp* and **mstmp** !
 
 # /etc/ssmtp/ssmtp.conf
 
@@ -30,29 +17,6 @@ of [msmtp](http://msmtp.sourceforge.net/) & Debian's
 	AuthMethod=LOGIN
 	Debug=YES
 
-	echo testing | mail -s foobar root
-	[root@rojak ssmtp]# send-mail: 554 Transaction failed: Missing final '@domain'
-
-The final line of my `/etc/aliases` says
-
-	root:           foo@example.com
-
-I can see from the debug, it correctly remapping with a test like `echo testing | mail -s postmaster abuse`, e.g.
-
-	Nov 10 07:15:37 rojak sSMTP[8509]: Remapping: "abuse" --> "root"
-	Nov 10 07:15:37 rojak sSMTP[8509]: Remapping: "root" --> "foo@example.com"
-	Nov 10 07:21:22 rojak sSMTP[8540]: RCPT TO:<foo@example.com>
-
-But still the **To** is:
-
-	Nov 10 07:15:37 rojak sSMTP[8509]: To: abuse
-
-Which is rejected my AWS SES with:
-
-	Nov 10 07:15:38 rojak sSMTP[8509]: 554 Transaction failed: Missing final '@domain'
-
-I wish AWS SES could just use **RCPT TO** here.
-
 # /etc/msmtprc
 
 	defaults
@@ -67,11 +31,27 @@ I wish AWS SES could just use **RCPT TO** here.
 	password secret
 	from foo@example.com
 
-Don't forget the `ln -s /usr/bin/msmtp /usr/sbin/sendmail`
+# Workaround write your own /usr/sbin/sendmail
 
-Sadly [msmtp](http://msmtp.sourceforge.net/) is harder to debug and suffers from the same **554 Transaction failed: Missing final '@domain'** issue.
+When you get:
+
+	[root@rojak ssmtp]# send-mail: 554 Transaction failed: Missing final '@domain'
+	ssmtp: 554 Message rejected: Email address is not verified.
 
 	send-mail: server message: 554 Transaction failed: Missing final '@domain'
 	send-mail: could not send mail (account default from /etc/msmtprc)
 
-Update: A good answer to getting mail to one address is the [sed rewrite suggestion in /usr/sbin/sendmail](http://sourceforge.net/p/msmtp/mailman/message/34627361/)
+
+If you use msmtp:
+
+	v=foo@example.com # aws ses verified email
+	cat | /usr/bin/msmtp -i -- $v
+
+I recommend msmtp since the revaliases is just:
+
+	# cat /etc/msmtp-aliases
+	default: foo@example.com
+
+If you use ssmtp:
+
+	cat | /usr/sbin/ssmtp -i -- $v
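Spelled out, the replacement `/usr/sbin/sendmail` is just a two-liner; `force_sendmail` and the `MSMTP` variable are my own additions so the sketch can be dry-run (a real install would hardcode `/usr/bin/msmtp` in a script saved as `/usr/sbin/sendmail`):

```shell
# Force every outgoing mail to the one SES-verified address.
force_sendmail() {
    v=foo@example.com                        # your AWS SES verified email
    cat | ${MSMTP:-/usr/bin/msmtp} -i -- "$v"
}
```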

grammar
diff --git a/blog/Addressable_hostnames.mdwn b/blog/Addressable_hostnames.mdwn
index bfe27de..9df0545 100644
--- a/blog/Addressable_hostnames.mdwn
+++ b/blog/Addressable_hostnames.mdwn
@@ -12,7 +12,7 @@ it's simply `sg.dabase.com`.
 
 `X1C3` is my laptop. It's not online all the time. It's usually on a
 `192.168.x.x` address inside a NAT. To address it in a LAN it's simply
-`X1C3**.local**` since all the devices on my LAN run
+`X1C3.local` since all the devices on my LAN run
 [Avahi/Zeroconf/Bonjour/mDNS](https://wiki.archlinux.org/index.php/Avahi#Hostname_resolution)
 hostname resolution.
 You cannot connect to it from anywhere on the Internet.
@@ -21,7 +21,7 @@ to explore my Web apps off `http://X1C3.local/`.
 
 # Issues
 
-## How to test hostname that it's a remote or local machine?
+## How to test hostname if it's a remote or local machine?
 
 In shell it could be:
 

issues
diff --git a/blog/Addressable_hostnames.mdwn b/blog/Addressable_hostnames.mdwn
index d1e6312..bfe27de 100644
--- a/blog/Addressable_hostnames.mdwn
+++ b/blog/Addressable_hostnames.mdwn
@@ -12,8 +12,23 @@ it's simply `sg.dabase.com`.
 
 `X1C3` is my laptop. It's not online all the time. It's usually on a
 `192.168.x.x` address inside a NAT. To address it in a LAN it's simply
-`X1C3.local` since all the devices on my LAN run
+`X1C3**.local**` since all the devices on my LAN run
 [Avahi/Zeroconf/Bonjour/mDNS](https://wiki.archlinux.org/index.php/Avahi#Hostname_resolution)
-hostname resolution. You cannot connect to it from anywhere on the Internet.
+hostname resolution.
+You cannot connect to it from anywhere on the Internet.
 However if you are in the same LAN/vicinity/network as I am, you should be able
 to explore my Web apps off `http://X1C3.local/`.
+
+# Issues
+
+## How to test hostname that it's a remote or local machine?
+
+In shell it could be:
+
+	if test $(hostname) == $(hostname | cut -d. -f1); then echo local; else echo remote; fi
+
+## How to test mDNS is running correctly in "private infrastructure environments"?
+
+In shell it could be:
+
+	ping -c 1 $(hostname).local &>/dev/null && echo mDNS is working
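Both checks can be wrapped as functions; `is_local_hostname` and `mdns_works` are my own names, and the first assumes (exactly as the `cut -d. -f1` test above does) that a dot-free hostname means a non-FQDN local machine:

```shell
is_local_hostname() {
    case $1 in
        *.*) echo remote ;;   # FQDN, e.g. sg.dabase.com
        *)   echo local ;;    # bare name, e.g. X1C3
    esac
}

mdns_works() {
    ping -c 1 "$(hostname).local" > /dev/null 2>&1
}
```

e.g. `is_local_hostname "$(hostname)"`, and `mdns_works && echo mDNS is working`.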

domain names
diff --git a/blog/Addressable_hostnames.mdwn b/blog/Addressable_hostnames.mdwn
index de918a7..d1e6312 100644
--- a/blog/Addressable_hostnames.mdwn
+++ b/blog/Addressable_hostnames.mdwn
@@ -14,4 +14,6 @@ it's simply `sg.dabase.com`.
 `192.168.x.x` address inside a NAT. To address it in a LAN it's simply
 `X1C3.local` since all the devices on my LAN run
 [Avahi/Zeroconf/Bonjour/mDNS](https://wiki.archlinux.org/index.php/Avahi#Hostname_resolution)
-hostname resolution.
+hostname resolution. You cannot connect to it from anywhere on the Internet.
+However if you are in the same LAN/vicinity/network as I am, you should be able
+to explore my Web apps off `http://X1C3.local/`.

How to address stuff
diff --git a/blog/Addressable_hostnames.mdwn b/blog/Addressable_hostnames.mdwn
new file mode 100644
index 0000000..de918a7
--- /dev/null
+++ b/blog/Addressable_hostnames.mdwn
@@ -0,0 +1,17 @@
+Consider:
+
+	[hendry@sg ~]$ hostname
+	sg.dabase.com
+	[hendry@sg ~]$ logout
+	Connection to 128.199.115.232 closed.
+	X1C3:~$ hostname
+	X1C3
+
+`sg` is the name of my server. To address it from anywhere on the internet,
+it's simply `sg.dabase.com`.
+
+`X1C3` is my laptop. It's not online all the time. It's usually on a
+`192.168.x.x` address inside a NAT. To address it in a LAN it's simply
+`X1C3.local` since all the devices on my LAN run
+[Avahi/Zeroconf/Bonjour/mDNS](https://wiki.archlinux.org/index.php/Avahi#Hostname_resolution)
+hostname resolution.

Update with the tip
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
index c22476d..37f6195 100644
--- a/blog/Mail_from_a_VPS.mdwn
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -73,3 +73,5 @@ Sadly [msmtp](http://msmtp.sourceforge.net/) is harder to debug and suffers from
 
 	send-mail: server message: 554 Transaction failed: Missing final '@domain'
 	send-mail: could not send mail (account default from /etc/msmtprc)
+
+Update: A good answer to getting mail to one address is the [sed rewrite suggestion in /usr/sbin/sendmail](http://sourceforge.net/p/msmtp/mailman/message/34627361/)

Networking tip
diff --git a/e/12009.mdwn b/e/12009.mdwn
index 606a3f0..8d8dcd1 100644
--- a/e/12009.mdwn
+++ b/e/12009.mdwn
@@ -60,12 +60,14 @@ VPN and everything is OK.
 
 # Word about network accounting
 
-To see it's network activity, assuming your container is called "firefox" like mine:
+Assuming your container is called "firefox" like mine:
 
 	grep firefox /proc/net/dev
 	ve-firefox:  407205    2396    0    0    0     0          0         0  3732997    2814    0    0    0     0       0          0
 
-So ~4 megabytes for a non-interactive Desktop session for BBC news. Notice from
-the point of view of the host, the data was transmitted - to the container!
+So ~4 megabytes for a non-interactive Desktop session for BBC news.
+Notice from the point of view of the host, the data was transmitted -
+to the container!
 
-That's why `/proc/net/dev`'s **Receive** and **Transmit** might be flipped around.
+That's why `/proc/net/dev`'s **Receive** and **Transmit** might be
+flipped around.
diff --git a/e/18001.mdwn b/e/18001.mdwn
new file mode 100644
index 0000000..90bb216
--- /dev/null
+++ b/e/18001.mdwn
@@ -0,0 +1,13 @@
+[[!meta title="How to untag a VLAN"]]
+
+Create a test VLAN with OpenWRT like so:
+
+<img src=http://s.natalian.org/2015-11-13/1447393903_1918x1058.png>
+
+Assuming your eth0 is named enp0s25, as it is on my system.
+
+	sudo ip link add link enp0s25 name eth0.1 type vlan id 1
+	sudo ip link set dev eth0.1 up
+	sudo dhcpcd eth0.1
+
+You can inspect the VLAN ID with `sudo tcpdump -n -e -vv -ttt -i enp0s25 vlan` or with [wireshark](http://s.natalian.org/2015-11-13/1447394281_1054x1058.png).
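The three commands above, parameterized; `add_vlan` is my own helper name (note it names the device `$parent.$vid`, whereas the post names it `eth0.1` on enp0s25), and `RUN=echo` gives a dry run:

```shell
add_vlan() {
    parent=$1 vid=$2
    ${RUN:-sudo} ip link add link "$parent" name "$parent.$vid" type vlan id "$vid"
    ${RUN:-sudo} ip link set dev "$parent.$vid" up
    ${RUN:-sudo} dhcpcd "$parent.$vid"
}
```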
diff --git a/tips.mdwn b/tips.mdwn
index ff8fb28..6c22c4c 100644
--- a/tips.mdwn
+++ b/tips.mdwn
@@ -55,6 +55,10 @@
 ## Archlinux
 [[!inline pages="e/17* and !*/Discussion" archive="yes" rss="no" atom="no" timeformat="%F"]]
 
+## Networking
+[[!inline pages="e/18* and !*/Discussion" archive="yes" rss="no" atom="no" timeformat="%F"]]
+
+
 
 
 

Update
diff --git a/e/12009.mdwn b/e/12009.mdwn
index eae8c81..606a3f0 100644
--- a/e/12009.mdwn
+++ b/e/12009.mdwn
@@ -1,45 +1,64 @@
 [[!meta title="Running Firefox in a systemd-nspawn container"]]
 
+# Bootstrap your container
+
 Assuming you installed firefox in a container `~/containers/firefox`. On first
 run I had **Segmentation fault**s until I got [st](http://st.suckless.org/)
 working. I suspect it's something to do with fonts!
 
-# PID 1
+# Setup /etc/systemd/system/systemd-nspawn@firefox.service.d/override.conf
+
+<iframe width="560" height="315" src="https://www.youtube.com/embed/PYwFBYMBovk" frameborder="0" allowfullscreen></iframe>
 
-You can run Firefox as PID 1 with an invocation like:
+Mine looks like:
 
-	sudo systemd-nspawn --setenv=DISPLAY=:0 \
-				   --setenv=XAUTHORITY=~/.Xauthority \
-				   --bind-ro=$HOME/.Xauthority:/root/.Xauthority \
-				   --bind=/tmp/.X11-unix \
-				   -D ~/containers/firefox \
-				   firefox
+	[Service]
+	ExecStart=
+	ExecStart=/usr/bin/systemd-nspawn \
+							--bind-ro=/home/hendry/.Xauthority:/home/hendry/.Xauthority \
+							--bind=/home/hendry/.config:/home/hendry/.config \
+							--bind=/tmp/.X11-unix \
+							--bind=/dev/snd \
+							--bind=/run/user/1000/pulse:/run/user/host/pulse \
+							-D /home/hendry/containers/firefox \
+							--bind /dev/shm \
+							--bind /etc/machine-id \
+							--network-veth -b
 
-However, [Lennart warns](http://lists.freedesktop.org/archives/systemd-devel/2015-August/034006.html) that Firefox shouldn't be PID 1.
+# Setup systemd-networkd & OpenVPN
 
-# Running in a booted and isolated network interface
+I use a container networking configuration like so
+[/etc/systemd/network/80-container-host0.network](http://s.natalian.org/2015-08-26/80-container-host0.network).
 
-To isolate a container to its own networking interface you need to create a
-virtual Ethernet link ("veth") between host and container aka run with **-n**
-or **--network-veth**. However as I note in a [systemd-devel mailing list
-message](http://lists.freedesktop.org/archives/systemd-devel/2015-August/034013.html),
-you need to now **boot** the container in order to setup networking.
+My VPN configuration lives in ~/containers/firefox/etc/openvpn/uk.conf and
+is invoked by starting `openvpn@uk.service`.
 
-	sudo systemd-nspawn --bind-ro=$HOME/.Xauthority:/root/.Xauthority \
-				   --bind=/tmp/.X11-unix \
-				   -D ~/containers/firefox \
-				   --network-veth -b
+# Sound
 
-Now with the style of container instatation you need to either start
-**systemd-networkd** [manually](http://s.natalian.org/2015-08-25/bbcnews.png)
-or have enabled it beforehand.
+This is the most difficult part! After hours of trial and error, attempting to
+decipher cryptic error messages, I started pulseaudio with `--disable-shm=true`
+and things started to work!
 
-I use a container networking configuration like so
-[/etc/systemd/network/80-container-host0.network](http://s.natalian.org/2015-08-26/80-container-host0.network).
+I've tweaked `/usr/lib/systemd/user/pulseaudio.service` with that option.
+
+	sudo machinectl shell hendry@firefox --setenv=DISPLAY=:0 --setenv=PULSE_SERVER=unix:/run/user/host/pulse/native
+
+Note that my $USER is `hendry`, which matches an account also called
+`hendry` created in the container. This is the only way I have figured out how
+to get pulseaudio & sound working!!
+
+Firefox fails with `ALSA lib confmisc.c:768:(parse_card) cannot find card '0'`,
+but I've found Chromium to work.
+
+# TODO
+
+This setup needs work. The sound part especially is very cumbersome. Why is
+it so hard to share video/sound devices? FFS!
 
-Now run Firefox as [Lennart suggests](http://lists.freedesktop.org/archives/systemd-devel/2015-August/034014.html) with:
+OpenVPN is a bit clumsy in the sense that there is no way to quickly tell that
+I'm on the VPN and everything is OK.
 
-	sudo systemd-run -M firefox --setenv=DISPLAY=:0 firefox
+# A word about network accounting
 
 To see its network activity, assuming your container is called "firefox" like mine:
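 One way to do this (my assumption: systemd-nspawn's **--network-veth** names
 the host side of the veth pair `ve-<machine>`, so `ve-firefox` here):
 
 	# show per-interface byte/packet counters for the container's veth
 	ip -s link show ve-firefox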
 

Mail from a VPS
diff --git a/blog/Mail_from_a_VPS.mdwn b/blog/Mail_from_a_VPS.mdwn
new file mode 100644
index 0000000..c22476d
--- /dev/null
+++ b/blog/Mail_from_a_VPS.mdwn
@@ -0,0 +1,75 @@
+So I thought it was prudent to establish reliable `mail()` functionality from
+the various servers I administer.
+
+The first step was to choose a name for the VPS. I chose "rojak".
+
+I set up a domain alias at Fastmail.fm so I can receive emails to that address.
+
+I then set up <https://aws.amazon.com/ses/> with a [verified
+email](http://s.natalian.org/2015-11-10/1447139043_1150x1058.png) so that it
+could send as that address.
+
+Caveat: you are only able to send mail to and from the verified address. So
+for example if you had foo@example.com verified, you can only send to and from
+that address!
+
+So now I needed a dumb mailer. My new VPS runs Fedora 23, and the usual
+candidates are [msmtp](http://msmtp.sourceforge.net/) & Debian's
+[ssmtp](https://wiki.debian.org/sSMTP).
+
+# /etc/ssmtp/ssmtp.conf
+
+	root=foo@example.com
+	FromLineOverride=NO
+	mailhub=email-smtp.us-west-2.amazonaws.com:587
+	AuthUser=AKIAINSHZHYMQYHXD4FQ
+	AuthPass=secret
+	UseSTARTTLS=yes
+	UseTLS=YES
+	TLS_CA_File=/etc/pki/tls/certs/ca-bundle.crt
+	AuthMethod=LOGIN
+	Debug=YES
+
+A test then fails like this:
+
+	echo testing | mail -s foobar root
+	[root@rojak ssmtp]# send-mail: 554 Transaction failed: Missing final '@domain'
+
+The final line of my `/etc/aliases` says
+
+	root:           foo@example.com
+
+I can see from the debug output that it correctly remaps, with a test like `echo testing | mail -s postmaster abuse`, e.g.
+
+	Nov 10 07:15:37 rojak sSMTP[8509]: Remapping: "abuse" --> "root"
+	Nov 10 07:15:37 rojak sSMTP[8509]: Remapping: "root" --> "foo@example.com"
+	Nov 10 07:21:22 rojak sSMTP[8540]: RCPT TO:<foo@example.com>
+
+But still the **To** is:
+
+	Nov 10 07:15:37 rojak sSMTP[8509]: To: abuse
+
+Which is rejected by AWS SES with:
+
+	Nov 10 07:15:38 rojak sSMTP[8509]: 554 Transaction failed: Missing final '@domain'
+
+I wish AWS SES could just use **RCPT TO** here.
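+
+One workaround sketch (assuming the verified address is the foo@example.com
+placeholder above): hand `mail` a fully qualified recipient, so the **To**
+header already carries a domain:
+
+	# sidestep the alias remapping by addressing the verified account directly
+	echo testing | mail -s foobar foo@example.com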
+
+# /etc/msmtprc
+
+	defaults
+	tls on
+	tls_starttls on
+	tls_trust_file /etc/ssl/certs/ca-bundle.crt
+	account default
+	host email-smtp.us-west-2.amazonaws.com
+	port 587
+	auth on
+	user AKIAINSHZHYMQYHXD4FQ
+	password secret
+	from foo@example.com
+
+Don't forget to `ln -s /usr/bin/msmtp /usr/sbin/sendmail`.
+
+Sadly [msmtp](http://msmtp.sourceforge.net/) is harder to debug and suffers from the same **554 Transaction failed: Missing final '@domain'** issue.
+
+	send-mail: server message: 554 Transaction failed: Missing final '@domain'
+	send-mail: could not send mail (account default from /etc/msmtprc)
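+
+For debugging, msmtp can at least be driven directly with full SMTP tracing (a
+sketch, reusing the foo@example.com placeholder):
+
+	# print the whole SMTP dialogue with SES, using the "default" account
+	printf 'To: foo@example.com\nSubject: test\n\nhello\n' \
+		| msmtp --debug -a default foo@example.com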

Update
diff --git a/blog/7_days_to_leave_Hiveage.mdwn b/blog/7_days_to_leave_Hiveage.mdwn
index 787e5f9..ce67cfe 100644
--- a/blog/7_days_to_leave_Hiveage.mdwn
+++ b/blog/7_days_to_leave_Hiveage.mdwn
@@ -1,3 +1,5 @@
+Update: [Wrote my own billing platform](https://www.youtube.com/watch?v=PPL1C5TmGvY)
+
 I'm on holiday, but I need to leave Hiveage since I've been told to leave the
 service in this [email exchange](http://s.natalian.org/2015-06-17/7days.pdf),
 after posting [[Hiveage_grievances]]. :(

Final words
diff --git a/blog/Web_IRC_logger.mdwn b/blog/Web_IRC_logger.mdwn
index 0cf837f..6355764 100644
--- a/blog/Web_IRC_logger.mdwn
+++ b/blog/Web_IRC_logger.mdwn
@@ -44,7 +44,7 @@ Of course you would need to modify this for your IRC channel and $USER too.
 
 	[Service]
 	ConditionPathExists=/home/hendry/irc/irc.freenode.net/#hackerspacesg/out
-	# This is important for it to find it's template
+	# This is important for it to find its template
 	WorkingDirectory=/home/hendry/tmp/gotail
 	ExecStart=/home/hendry/tmp/gotail/gotail /home/hendry/irc/irc.freenode.net/#hackerspacesg/out
 	Restart=on-failure
@@ -52,3 +52,6 @@ Of course you would need to modify this for your IRC channel and $USER too.
 
 	[Install]
 	WantedBy=multi-user.target
+
+Edit & test them out (cycles of `systemctl start` and `systemctl status`!) and
+once you are happy, the final test is to `sudo systemctl enable ii gotail` and
+reboot!
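+
+The cycle looks roughly like this (unit names as above):
+
+	sudo systemctl daemon-reload          # pick up edited unit files
+	sudo systemctl restart ii gotail
+	systemctl status ii gotail            # check they came up cleanly
+	journalctl -u gotail -n 20            # recent log lines if they didn't
+	sudo systemctl enable ii gotail       # finally, persist across reboots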

More explaining
diff --git a/blog/Web_IRC_logger.mdwn b/blog/Web_IRC_logger.mdwn
index 8426853..0cf837f 100644
--- a/blog/Web_IRC_logger.mdwn
+++ b/blog/Web_IRC_logger.mdwn
@@ -1,13 +1,13 @@
 Upon <http://irc.dabase.com/> I log the `irc://irc.freenode.net/hackerspacesg`
-IRC channel. I implemented this as **minimal** as possible! It's not an
-archive, it's just to see what's the latest chitter chatter for a [community
-sign board](http://frame.dabase.com/).
+IRC channel. I implemented this as **minimal** as possible! It's **not an
+archive**, it's just to see what's the latest _chitter chatter_ for a
+[community sign board](http://frame.dabase.com/).
 
 To do this you will need:
 
-1. an always connected VPS with systemd
-* [ii](http://tools.suckless.org/ii/) suckless IRC client
-* [gotail](https://github.com/kaihendry/gotail)
+1. an always-connected VPS with systemd (I use Archlinux on an AWS EC2 micro instance)
+* [ii suckless IRC client](http://tools.suckless.org/ii/) since it's filesystem-based!
+* [gotail](https://github.com/kaihendry/gotail) to take the file and stream it to the browser using [SSE](https://en.wikipedia.org/wiki/Server-sent_events)
 
 # systemd service files
 
