Kai Hendry's other blog archives

How much does it cost to run an Archlinux mirror on EC2?

AWS Singapore kindly gifted http://hackerspace.sg/ with 500SGD of AWS credits.

Since the mirrors http://mirror.nus.edu.sg/ and http://download.nus.edu.sg (run by two separate, competing groups at NUS which oddly try to outdo each other in incompetence) have had several issues mirroring Archlinux in my two years of using either of them, I thought: let's use these credits to host an Archlinux mirror!!

After much head scratching with the AWS jargon of {ebs,s3} and {hvm,paravirtual} EC2 Archlinux images, I launched an "ebs hvm" instance of m3.xlarge.

I got a nice 80GB zpool going for the mirror and everything was looking good. However, now to do the budgeting.

On-demand pricing is $0.392 an hour.

There are roughly 9000 hours in a year, so that's about $3528. Eeeek, over budget by some 3000 dollars!

Ignoring the added complexity of Spot and EBS enhancements, a one year reserved instance under "Light Utilization Reserved Instances" (I am not sure what that means) is 497 dollars! Yes!!

I'm told "Light utilization means that you will not turn it on all the time". For 1 year I would need heavy utilization!

So an m3.xlarge would be: 981 (down payment) + 24 * 365 * 0.124 = $2067.24, about 1500 dollars over budget.
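The arithmetic above as a quick sanity check, using the prices quoted in this post (awk is used just for the floating point):

```shell
# Recompute the EC2 figures quoted above (USD, m3.xlarge, prices as quoted).
awk 'BEGIN {
    printf "on-demand, ~9000 h/yr : $%.2f\n", 0.392 * 9000          # $3528.00
    printf "1yr heavy-use RI      : $%.2f\n", 981 + 0.124 * 24 * 365 # $2067.24
}'
```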

Oh and bandwidth?

Well, a mirror is going to be a bandwidth hog, and AWS charges for bandwidth. I tried their calculator (since I couldn't figure out what they charge per GB) with a lowball 1TB a month in and out, and that costs almost 200USD.

Wow that's expensive! AWS EC2 (+ 500SGD credit) isn't suitable for an Archlinux mirror! :(

Digital Ocean quote

For a machine with at least 50GB of disk, you would need Digital Ocean's 60GB offering, with

  • 4GB / 2 CPUS
  • 60GB SSD DISK
  • 4TB TRANSFER

So that is 40USD a MONTH, or 480USD a year. A lot cheaper than EC2, and bandwidth is clearly priced at 2c per GB, so 1TB = 20USD IIUC.
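The same sanity check for the Digital Ocean numbers, at the rates quoted above:

```shell
# Recompute the Digital Ocean figures: $40/month droplet, 2c/GB transfer.
awk 'BEGIN {
    printf "droplet, 1 year : $%d\n", 40 * 12      # $480
    printf "1TB at 2c/GB    : $%d\n", 0.02 * 1000  # $20
}'
```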

Lessons learnt

Running a mirror is quite expensive on EC2. It's not really feasible on DO either without some free unmetered traffic.

SIGFOX

I attended an event at Hackerspace.SG, "Running your IoT devices on a low power, long range network", which showcased Lee Lup's slides on SIGFOX at Singtel.

My thoughts are that the 12 byte payload is not really suitable for monitoring, but more for events.

  • Door closed
  • Bin's full
  • Some significant threshold exceeded

It's not super good for monitoring since:

  • payload is too small
  • there is no accurate time
  • you can't monitor at intervals shorter than 10 minutes
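With only 12 bytes per message you end up packing events into a few hex bytes. A bash sketch of decoding one hypothetical two-byte "bin fill level" event (the payload format here is entirely made up):

```shell
# Decode a hypothetical 2-byte SIGFOX payload: event type + fill level.
payload="02c8"                        # made-up hex payload off the wire
event=${payload:0:2}                  # first byte: event type
level=$((16#${payload:2:2}))          # second byte: hex -> decimal
echo "event=$event fill-level=$level" # event=02 fill-level=200
```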

I found the people tracking / GPS use cases to be silly, since GPS needs a lot of power.

SIGFOX lends itself to well-known fixed locations, such as public property that needs maintenance and doesn't have Internet connectivity for one reason or another.

I'm thinking:

  • alerting to street lamps failing
  • alerting to flooding
  • alerting to when a bin is full & needs emptying
  • alerting when a special door/gate is opened

I like how the SIGFOX station seems to send out GitHub-style webhook payloads to a payload URL.


Latest tips

Working with directories of unknown files

Using http://mywiki.wooledge.org/BashFAQ/020 as a starting point, you could:

find /tmp -type f -print0 | while IFS= read -r -d '' file
do
   echo properly escaped "$file" for doing stuff
done

However, that's a bit ugly. And note that read -d '' only works in bash, so none of this is "POSIX".

Another way of writing this, which works from bash 4 onwards, is to use dotglob/globstar:

shopt -s dotglob  # find .FILES
shopt -s globstar # make ** recurse
for f in /tmp/**
do
    if [[ -f $f && ! -L $f ]]
    then
        echo properly escaped "$f" for doing stuff
    fi
done
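A self-contained check of the globstar loop above, run against a throwaway tree (the filenames are arbitrary examples; needs bash 4+):

```shell
# Exercise dotglob/globstar against a scratch directory with awkward names.
shopt -s dotglob globstar
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/a file" "$d/.hidden" "$d/sub/b"
count=0
for f in "$d"/**
do
    [[ -f $f && ! -L $f ]] && count=$((count + 1))
done
echo "found $count regular files"   # found 3 regular files
rm -rf "$d"
```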

Another perhaps more POSIX way is

foo () {
    for i in "$@"
    do
        printf '%s\n' "$i"
    done
}
export -f foo
find /tmp -type f -exec bash -c 'foo "$@"' - {} + | wc -l

I.e. export a shell function to be executed by the -exec parameter of find, or just use a separate script file.

Ensure www-data is always able to write

Ensure your fs is mounted with acl.

 mount | grep acl
/dev/root on / type ext3 (rw,noatime,errors=remount-ro,acl,barrier=0,data=writeback)

And to ensure www-data always has free rein:

setfacl -R -m default:group:www-data:rwx /srv/www
Xorgs version
12:04 <hendry> i'm using wheezy Xorg packages  1:7.7+3~deb7u1 and http://ix.io/d2p says X.Org X Server 1.12.4
12:04 <hendry> Release Date: 2012-08-27
12:04 <hendry> Is that right?
12:05 <jcristau> probably
12:11 <hendry> wondering why there is a mis-match with versions
12:11 <hendry> is there a newer Xorg available for wheezy? something to eek out performance with intel cards
12:12 <jcristau> there isn't a mismatch
12:12 <jcristau> and no
12:12 <hendry> 1:7.7+3~deb7u1 & 1.12.4 doesn't make sense to me ... :}
12:13 <jcristau> you can't understand that different things can have different versions?
12:20 <hendry> so what does 7.7+3~deb7u1 refer to ?
12:22 <pochu> 7.7 is the upstream version, +3 is the debian revision, and deb7u1 is the first update to Debian 7 (wheezy)
12:22 <psychon> http://www.x.org/wiki/Releases/7.7/
12:28 <jcristau> 7.7 is the base version of X.Org's X11 distribution
12:28 <jcristau> 1.12.4 is the version of the X server

http://www.x.org/wiki/Releases/7.7/

Setting a read S3 policy from the command line

Easier than logging into https://console.aws.amazon.com/s3/ since I need to get my MFA device out every time.

x220:/tmp$ bash allow-read.sh b3-webc
s3://b3-webc/: Policy updated

allow-read.sh is just a script to help write the policy:

x220:/tmp$ cat allow-read.sh
#!/bin/bash
test "$1" || exit 1
s3_bucket=$1
tmp=$(mktemp)
s3cmd ls > "$tmp"
if ! grep -q "$s3_bucket" "$tmp"
then
        echo "Could not find bucket s3://${s3_bucket}"
        cat "$tmp"
        exit 1
fi
cat <<END > "$tmp"
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": {
      "AWS": "*"
    },
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::${s3_bucket}/*"]
  }]
}
END
s3cmd setpolicy "$tmp" s3://${s3_bucket}
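Before pointing s3cmd at a hand-templated policy, it's cheap to syntax-check the JSON first. A sketch (bucket name is hypothetical; python3's stdlib json.tool exits non-zero on invalid JSON):

```shell
# Render the policy for a made-up bucket and syntax-check it with python3.
s3_bucket=example-bucket
tmp=$(mktemp)
cat <<END > "$tmp"
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::${s3_bucket}/*"]
  }]
}
END
python3 -m json.tool < "$tmp" > /dev/null && echo "policy JSON OK"
rm -f "$tmp"
```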
Failed to start Verify integrity of password and group files.

Fix it by running pwck.

x220:~$ sudo pwck
user 'umurmur': directory '/home/umurmur' does not exist
pwck: no changes
x220:~$ pacman -Ss umurmur
community/umurmur 0.2.14-1
    Minimalistic Mumble server
x220:~$ sudo vim /etc/passwd
x220:~$ sudo pwck
no matching password file entry in /etc/passwd
delete line 'umurmur:!:16094::::::'? yes
pwck: the files have been updated