Archive for July, 2016

How to get size of folders beginning with . (period)


28 Jul

Ok, quick note. This one was annoying. du says my home directory is using 3.8G:

mintyfresh glitch-e6430 # du -sh home/glitch/
3.8G home/glitch/
mintyfresh glitch-e6430 #

Ok so, let’s figure out where it’s gone to:

mintyfresh glitch-e6430 # du -sh home/glitch/* | sort -h | tail -n 5
112K home/glitch/dropbox.py
1.4M home/glitch/get-pip.py
2.6M home/glitch/recording-1059.wav
15M home/glitch/base.apk
15M home/glitch/chef-repo.tar.bz2
mintyfresh glitch-e6430 #

Well that doesn’t make sense. None of those files and folders come anywhere close to adding up to 3.8G. So the space must be in a hidden directory, one beginning with a . (period). But how do you get the disk usage of those?

mintyfresh glitch-e6430 # du -sh home/glitch/.*
3.8G home/glitch/.
4.0K home/glitch/..
mintyfresh glitch-e6430 #

NOPE. It’s not as easy as you’d think to get the disk space of all the directories beginning with a period.

mintyfresh glitch-e6430 # cd home/glitch/
mintyfresh glitch # du -sh ./.*
3.8G ./.
4.0K ./..
mintyfresh glitch #
mintyfresh glitch # du -sh `ls -a`
3.8G .
4.0K ..
mintyfresh glitch #

NOPE. I tried the above because more than one guide on the internet says it should work. I’m not sure what I’m doing wrong, but I cobbled together my own solution and I’m posting it here so I can reference it later, that is all. I’m sure I would eventually have found an article with a similar, or maybe even better, solution.
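For what it’s worth, the root cause is the shell, not du: in most shells the glob .* also matches the special entries . and .., so du -s ends up summarizing the entire current directory (and its parent) as single lines. A quick demo (exact behavior varies by shell and shell options):

```shell
# Make a scratch directory containing one hidden entry
mkdir -p /tmp/dotdemo/.hidden && cd /tmp/dotdemo

# Show what the glob actually expands to
echo .*
# On many shells this prints: . .. .hidden
# which is why `du -sh .*` reports `.` (the whole directory) and `..`
# instead of the individual hidden entries
```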

mintyfresh glitch # du -sh .[a-z0-9]* | sort -h | tail -n 5
207M .dropbox
588M .vagrant.d
679M .config
858M .wine
1.1G .thunderbird
mintyfresh glitch #

There we go. Again, I’m sure there’s a “better” way to do it (in Linux there always is), but this works for me.
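For reference, the glob above does miss hidden entries that start with an uppercase letter (like .Xauthority). A couple of more robust variants, assuming GNU du and a POSIX-ish shell: .[!.]* matches hidden entries without catching . and .. (the extra ..?* pattern covers names starting with two dots), or you can skip globbing entirely and let du summarize one level deep:

```shell
# Hidden entries only, skipping the special . and .. entries
du -sh .[!.]* ..?* 2>/dev/null | sort -h | tail -n 5

# Or let GNU du summarize everything one level deep, hidden or not
du -h --max-depth=1 . | sort -h | tail -n 5
```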

Cable Modems and Wireless Routers


25 Jul

I was recently asked about the best cable modem + wireless router all-in-one device.

I am wholeheartedly against combination devices. In clients’ homes (I used to do on-site upgrades and repairs) I’ve seen the modem half go bad while the router still worked, and vice versa. There are also a LOT of good wireless routers out there that don’t offer an “integrated modem” option, so with a combo unit you’d be missing out on some amazing wireless routers, as well as the ability to easily upgrade each piece in the future. Think of it like a TV/VCR combination device: sure, you can do it, and for a select few it may be an ok solution, but the majority of people are going to want to separate the devices.

Since I did about an hour of research, ’cause I’m a geek like that, I decided to drop the information here.

General Rule of Thumb

The reason I keep mentioning the date of this article is that I’m absolutely positive it will be outdated within the next 6 months. But the general rule of thumb you can take from the above: Motorola/Arris builds solid modems, and Asus builds solid wireless routers. Figure out your budget and devote $45-$90 to the modem; find the best Motorola you can get in that price range. Then devote the rest to a wireless router; find the best Asus wireless router you can get at $45 or more, and make sure it’s supported by Merlin. Even if you don’t use the Merlin software, support for it is a good indication of a well-rounded, popular, community-supported Asus router. Merlin doesn’t deal with weird one-off models or low-end devices with subpar performance; if Merlin supports the router, that’s a good sign it’s a good router overall.

 

Best Overall Budget Combination

The best all-around cable modem + wireless router combination IMHO (as of July 2016) is the Motorola/Arris SB6141 Cable Modem + Asus RT-N66U wireless router.

Modem

The SB6141 has 8 channels down and 4 channels up. Each channel supports 42.88 Mbit/s, so in this 8×4 configuration the modem is theoretically able to download up to 343 Mbit/s and upload up to 171 Mbit/s. That doesn’t mean you’ll get that speed; you get what you pay for. That’s just what this modem is capable of doing. So unless you pay Comcast/Suddenlink/etc. for more than 343 Mbit/s, you’ll be able to use this modem for years to come.
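A back-of-the-envelope check of those numbers, using the per-channel figure cited above (42.88 Mbit/s per bonded DOCSIS 3.0 channel, applied to both directions here for illustration):

```shell
# Theoretical throughput of an 8x4 channel-bonded modem like the SB6141
awk 'BEGIN {
  printf "down: %d Mbit/s\n", 8 * 42.88   # 8 bonded downstream channels
  printf "up:   %d Mbit/s\n", 4 * 42.88   # 4 bonded upstream channels
}'
```

which works out to the 343 down / 171 up figures quoted above.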

I found this modem used on Amazon for $40 + $3 shipping. New, it’d run you something like $60 shipped. I don’t see any harm in a used modem; they don’t slow down over time or suffer any ill effects with age, so if you can save money by going used, do it.

Wireless Router

802.11ac is the new hotness right now, but a lot of wireless devices don’t yet support it; they will in a year or two. My recommendation in general is to go with a cheapish, but top notch, 802.11n wireless router and upgrade to an 802.11ac router in a year or so. That’s why the Asus RT-N66U is my choice. Keep in mind this router came out in 2012/2013, but it’s gotten nothing but positive reviews. I personally prefer the premium Asus wireless routers; there are cheapo Asus routers not worth the time of day, so avoid them.

In general I look to the Merlin firmware to help identify the most popular and best-supported wireless routers. Merlin is my favorite aftermarket firmware for Asus routers. It takes the original Asus firmware and enhances it, adding more features and such, but it’s completely based on the Asus firmware and doesn’t require any weird or permanent modifications to the router. You simply flash the Merlin firmware and you get all the new features, and if you don’t like it you can always flash the original firmware and you’re back to stock. What this also tells me is that even after Asus stops releasing updates for a router (they’re still actively updating and supporting the RT-N66U as of July 2016), you may be able to find aftermarket firmware that gives new features to the old device.

At this time, the supported devices for Merlin are:

  • RT-N66U
  • RT-AC56U
  • RT-AC66U
  • RT-AC68U & RT-AC68P (including revision C1)
  • RT-AC87U
  • RT-AC3200
  • RT-AC88U
  • RT-AC3100
  • RT-AC5300
  • RT-AC1900 & RT-AC1900P

It looks like Merlin has gone mostly AC, but they chose to continue supporting the last 802.11n router Asus made, the RT-N66U. So I chose this wireless router based on reviews and on the fact that both Asus and the community are still supporting it with advanced features and even bug fixes.

I found the router for as little as $45 used and about $60 new. Personally, I’ve used and abused the RT-AC68U wireless router for almost 2 years so far and in general it’s been a solid performer. I’ve got no doubt the RT-N66U will also be a solid wireless router for the masses.

Conclusion

So for $90-$120 you can get an SB6141 and RT-N66U combination and be set for a while. In a year or two you may want to upgrade the wireless router, but it’s up to you; if you aren’t having any speed or performance issues and everything is great, you don’t have to.

Top of the Line Combination

That would be the Motorola/Arris SB6190 modem + the Asus RT-AC5300.

Modem

Ok, so if you have money to burn and want “top of the line”, the best of the best, you could get the SB6190 modem for about $125 as of July 2016. That modem offers an insane 32 channels down and 8 channels up, so theoretically it can download up to 1.4 Gbit/s and upload up to 262 Mbit/s. That’s insanely awesome. My only concern is that this thing is very expensive and it’s still just a DOCSIS 3.0 modem; with DOCSIS 3.1 right around the corner, I’d rather spend that big money on one of those modems once they become more widely available. If you absolutely want a killer setup right now, you could go with the SB6141 mentioned above for $45-$60, or you could compromise and get the SB6183 for about $90, which offers a 16×4 configuration. But if money’s no object, the SB6190 is the king at 32×8, though you’ll need a serious internet plan to even come close to seeing its speeds.

Don’t go and get this $125 modem if you have a 150 Mbit/s connection; you would see zero internet performance difference between the SB6190 and the SB6183. I’d be willing to bet you wouldn’t even see a performance difference between the SB6190 ($125) and the SB6141 ($60).

If you have a 300 Mbit/s connection you would most likely see a difference between the SB6141 ($60) and the SB6183 ($90), but I’d still bet you wouldn’t see a difference between the SB6183 ($90) and the SB6190 ($125).

Wireless Router

Well, that would be, as of this writing (July 2016), the Asus RT-AC5300. It’s a $380 wireless router and a MONSTER of one. And yes, the Merlin firmware supports it, which makes it that much more insane.

  • Tri-band (dual 5 GHz, single 2.4 GHz) with the latest 802.11ac 4×4 technology for maximum throughput (5334 Mbps) and coverage (up to 5,000 sq. ft.)
  • MU-MIMO technology enables multiple compatible clients to connect at each client’s respective maximum speed
  • Built-in access to WTFast Gamers Private Network (GPN) of route-optimized servers ensures low, stable ping times for gaming
  • AiProtection Powered by Trend Micro provides multi-stage protection from vulnerability detection to protecting sensitive data
  • ASUS Smart Connect delivers consistent bandwidth by dynamically switching devices between 2.4 and 5 GHz bands based on speed, load and signal strength
  • ASUS Ranked “Highest Customer Satisfaction with Wireless Routers in the U.S.”- J.D. Power

Let’s just say this thing is overkill and then some. I don’t even know what clients would support this beast; you’d have to upgrade the wireless cards in every device to even come close to using this thing’s capabilities. No laptop (as of right now) has a built-in wireless card capable of fully utilizing this router. It’s like drinking from a firehose. Anyways, if you want the best, you could go with this and be set for the next 4-6 years, easy.

Conclusion

If you pay Cox/Charter/Comcast/Suddenlink for more than 500 Mbit/s and you have $505 or so lying around, then go with the Arris SB6190 modem + Asus RT-AC5300 wireless router. It’s a truly magnificent combination of hardware.

Amazon AWS EBS – magnetic vs sc1 (cold storage) vs st1 (throughput optimized)


18 Jul

Amazon AWS EBS magnetic/sc1/st1

So I intend to install Graylog on an AWS EC2 instance (aka a virtual server); I prefer Graylog over ELK, and I need a storage mount for the volumes. My root partition, /, is just a 50gb general purpose SSD volume (gp2), but when I went to add a 500gb magnetic volume I was surprised to find other options, ST1 and SC1. In us-east-1, anyways, it’s $0.05 per gb for magnetic, $0.045 for throughput optimized (st1) and $0.025 for cold storage (sc1). So, which one is faster? If you look at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html you’re led to believe that both SC1 and ST1 are faster than magnetic, and since they’re also cheaper, why not just use them? However…

What they don’t tell you is that you need a MINIMUM of 500gb to even use ST1 or SC1. You can create a 150gb magnetic EBS volume if you want, and it’d work out to be much cheaper.

Additionally, the speeds they show you, which lead you to think ST1/SC1 are superior, are “max” speeds. Remember, they usually give you more speed the more storage you provision. So with a 500gb volume, the bare minimum for SC1/ST1, you may not get the “max” speeds they show. Magnetic volumes, meanwhile, range from 1gb to 1000gb, so a 500gb magnetic volume should get you about half the maximum magnetic performance.

So this post is to answer a question that was driving me nuts: is half the magnetic volume performance better than the minimum SC1 or ST1 performance? Here we go. I’ll be using “sysbench” to benchmark ext4-formatted 500gb magnetic, st1 and sc1 EBS volumes to learn which is “fastest”. Now don’t get me wrong, I know SSD is the fastest, but I just need cheap storage that’s reasonably fast to store and retrieve log files. According to Amazon, ST1 is perfect for this.

It’s also worth noting that ST1 and SC1 cannot be boot volumes; only SSD and magnetic can. So if you need an EC2 instance for pure EBS storage, you’d need an 8gb SSD volume for the OS itself, and then you can add ST1/SC1/magnetic volumes to it.
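To put numbers on the price difference, here’s the monthly storage cost of a minimum-size 500gb volume of each type at the us-east-1 rates quoted above, plus the 150gb magnetic option (storage only; note that magnetic volumes also bill per million I/O requests, which this ignores):

```shell
# Monthly EBS storage cost, us-east-1 rates as of July 2016
awk 'BEGIN {
  printf "500gb magnetic: $%.2f/mo\n", 500 * 0.05
  printf "500gb st1:      $%.2f/mo\n", 500 * 0.045
  printf "500gb sc1:      $%.2f/mo\n", 500 * 0.025
  printf "150gb magnetic: $%.2f/mo\n", 150 * 0.05
}'
```

So a 150gb magnetic volume ($7.50/mo) really is much cheaper than the cheapest possible SC1 volume ($12.50/mo).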

Create Test Files

Ok, I’m going to create 10gb of “test” files. Since this EC2 instance only has 4gb of memory, the benchmark is guaranteed not to just be hitting the memory cache. I like to use “time” when I create these files to see how long it takes to create 10gb of random test files; it gives me a general idea of what to expect in terms of performance.

Magnetic

[[email protected] magnetic]# time sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark
128 files, 81920Kb each, 10240Mb total
Creating files for the test...

real 6m36.685s
user 0m0.029s
sys 0m8.231s

Cold Storage (SC1)

[[email protected] sc1]# time sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...

real 4m10.453s
user 0m0.039s
sys 0m7.958s

Throughput Optimized (ST1)

[[email protected] st1]# time sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12: multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...

real 2m56.582s
user 0m0.025s
sys 0m8.137s

Initial Conclusion

It looks like ST1 is the fastest, followed by SC1 and then Magnetic. Interesting…

Random Read/Write for 5 minutes

Ok, time to run random read/writes for 5 minutes. I chose 5 minutes because I wanted the disks to really spin up and begin processing. Sometimes AWS/Amazon likes to burst, so I figured 5 minutes would show a good overall average, whether a volume bursts at the beginning and slows down or starts slowly and ramps up. After all, log files could be thrown at this volume constantly. The results won’t mean much unless you compare them with the other results; pay closest attention to the “Total transferred” line with the Mb/sec figure, that’s the most important one to compare.

Magnetic

[[email protected] magnetic]# sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw \
> --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 28252 Read, 18834 Write, 60160 Other = 107246 Total
Read 441.44Mb Written 294.28Mb Total transferred 735.72Mb (2.4523Mb/sec)
 156.95 Requests/sec executed

Test execution summary:
 total time: 300.0130s
 total number of events: 47086
 total time taken by event execution: 264.3029
 per-request statistics:
 min: 0.00ms
 avg: 5.61ms
 max: 224.96ms
 approx. 95 percentile: 17.49ms

Threads fairness:
 events (avg/stddev): 47086.0000/0.00
 execution time (avg/stddev): 264.3029/0.00

Cold Storage

[[email protected] sc1]# sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 7419 Read, 4946 Write, 15744 Other = 28109 Total
Read 115.92Mb Written 77.281Mb Total transferred 193.2Mb (659.42Kb/sec)
 41.21 Requests/sec executed

Test execution summary:
 total time: 300.0208s
 total number of events: 12365
 total time taken by event execution: 179.8208
 per-request statistics:
 min: 0.00ms
 avg: 14.54ms
 max: 529.72ms
 approx. 95 percentile: 38.78ms

Threads fairness:
 events (avg/stddev): 12365.0000/0.00
 execution time (avg/stddev): 179.8208/0.00

Throughput Optimized

[[email protected] st1]# sleep 180 && sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed: 25920 Read, 17280 Write, 55195 Other = 98395 Total
Read 405Mb Written 270Mb Total transferred 675Mb (2.25Mb/sec)
 144.00 Requests/sec executed

Test execution summary:
 total time: 300.0024s
 total number of events: 43200
 total time taken by event execution: 189.2417
 per-request statistics:
 min: 0.00ms
 avg: 4.38ms
 max: 334.78ms
 approx. 95 percentile: 15.43ms

Threads fairness:
 events (avg/stddev): 43200.0000/0.00
 execution time (avg/stddev): 189.2417/0.00

Test Conclusion

Interesting again. I thought the results would mirror the test file creation results, but it looks like magnetic outperformed both SC1 and ST1. However, ST1 came fairly close to magnetic, maybe close enough that the performance could be considered comparable or identical. Considering ST1 created the test files in half the time, I’m thinking magnetic is the best at reads, ST1 is the best at writes, and SC1 is somewhere in between. So let’s move on.

Read Tests

Ok, so I’m going to do 2 read tests now, each for only 180 seconds (3 minutes): first a sequential read test, then a random read test. Sequential reading is usually the fastest thing a disk or network connection can do; it’s very easy on the drive, so you may find the bottleneck is the disk controller or the network connection and not the disk itself.
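The exact invocations aren’t shown below, but with the sysbench 0.4 syntax used earlier they would look something like this (seqrd is sequential read, rndrd is random read, against the test files already prepared):

```shell
# Sequential read, 3 minutes
sysbench --test=fileio --file-total-size=10G --file-test-mode=seqrd \
  --max-time=180 --max-requests=0 run

# Random read, 3 minutes
sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrd \
  --max-time=180 --max-requests=0 run
```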

Magnetic

Sequential Read: Read 10Gb  Written 0b  Total transferred 10Gb  (62.852Mb/sec)
Random Read: Read 461.53Mb  Written 0b  Total transferred 461.53Mb  (2.564Mb/sec)

Cold Storage (SC1)

Sequential Read: Read 7.1055Gb  Written 0b  Total transferred 7.1055Gb  (40.403Mb/sec)
Random Read: Read 155.95Mb  Written 0b  Total transferred 155.95Mb  (887.18Kb/sec)

Throughput Optimized (ST1)

Sequential Read: Read 10Gb  Written 0b  Total transferred 10Gb  (62.059Mb/sec)
Random Read: Read 479.41Mb  Written 0b  Total transferred 479.41Mb  (2.6634Mb/sec)

Test Conclusion

Well, it looks like cold storage read performance sucks. Magnetic and ST1 are neck and neck. I did another 90 second sequential read run on magnetic and it came in at 64.5 Mb/sec, so it’s not definitive that ST1 is slightly faster; I’d say they’re pretty much identical. Since magnetic is $0.05 per gigabyte and ST1 is $0.045 per gigabyte, I’d be leaning towards ST1, and my test file creation at the beginning indicated ST1 was faster at writes. But let’s do some actual write tests.

Write Tests

Ok, so I’m going to do 2 write tests now, each for only 180 seconds (3 minutes): first a sequential write test, then a random write test. Since I plan to use this storage mostly for writing logs, I’m interested to see which disk handles writes best.

When people talk about sequential vs random writes to a file, they’re generally drawing a distinction between writing without intermediate seeks (“sequential”) and a pattern of seek-write-seek-write-seek-write, etc. (“random”). When you write two blocks that are next to each other on disk, that’s a sequential write. When you write two blocks located far away from each other on disk (so the magnetic disk has to seek, i.e. move the head, to the new location), those are random writes.

Log file writes are generally sequential. If you share the disks with anything else (like a database, backups, the operating system, etc.) then the write performance could suffer since the disk has to seek to the other location to do the write and then go back to where the log file is.  Since this is a secondary EBS volume used mostly for log files, we should be ok.
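As with the reads, the invocations aren’t shown below, but in sysbench 0.4 syntax they would look something like this (seqwr is sequential write, rndwr is random write), with a cleanup step at the end:

```shell
# Sequential write, 3 minutes
sysbench --test=fileio --file-total-size=10G --file-test-mode=seqwr \
  --max-time=180 --max-requests=0 run

# Random write, 3 minutes
sysbench --test=fileio --file-total-size=10G --file-test-mode=rndwr \
  --max-time=180 --max-requests=0 run

# Remove the 10gb of test files when finished
sysbench --test=fileio --file-total-size=10G cleanup
```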

Magnetic

Sequential Write: Read 0b  Written 6.0571Gb  Total transferred 6.0571Gb  (34.457Mb/sec)
Random Write: Read 0b  Written 792.19Mb  Total transferred 792.19Mb  (4.4009Mb/sec)

Cold Storage (SC1)

Sequential Write: Read 0b  Written 8.0362Gb  Total transferred 8.0362Gb  (45.716Mb/sec)
Random Write: Read 0b  Written 92.188Mb  Total transferred 92.188Mb  (524.01Kb/sec)

Throughput Optimized (ST1)

Sequential Write: Read 0b  Written 10Gb  Total transferred 10Gb  (60.554Mb/sec)
Random Write: Read 0b  Written 378.12Mb  Total transferred 378.12Mb  (2.1006Mb/sec)

Test Conclusion

I was surprised SC1 was faster at sequential writes than magnetic, but then I remembered the test file creation times, where SC1 did outperform magnetic. Considering SC1 is half the price of magnetic, that’s kind of impressive. Its random write performance, though, was horrific. ST1 sequential write performance is king by a wide margin, but it comes at the cost of random write performance (where magnetic is king).
If you need well-rounded write performance, magnetic is your only option. I guess this is why magnetic EBS volumes can be used as boot volumes, unlike SC1 and ST1.

Conclusion

Ok, so it looks like ST1 is the way to go. Even the bottom-of-the-barrel performance of ST1 beats the middle-of-the-road performance of magnetic; I guess this goes in line with Amazon saying ST1 is optimized for databases, logs, etc. And since it’s a tad cheaper than magnetic, I’d say we found a winner: for logs I’m going with an ST1 EBS volume. However, if you need less than 500gb of storage, magnetic is really your only option, and it’s not a “horrible” option; it’s well rounded. But for 500gb or more, go ST1. Maybe I’ll get bored one day and benchmark a 150gb magnetic volume to see how much performance is actually lost versus a 500gb one.

SC1’s read performance sucks all around, and its random write performance is junk as well, but its sequential write performance is surprisingly decent. I guess SC1 would be good for backups and archives, which makes sense since that’s what Amazon advertises it for: infrequently accessed cold storage, i.e. backups and archives. Impressive that Amazon is able to optimize things to that level. As a side note, for backups/cold storage, in terms of performance and cost you have SC1 > S3 > Glacier, so I’m thinking a good AWS backup strategy would be to place your immediate backups onto an SC1 EBS volume, then after 4 weeks move them to an S3 bucket, and after 3 months move them to Glacier.
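The S3-to-Glacier half of that strategy can be automated with an S3 lifecycle rule (the SC1-to-S3 step would still need a cron job or script). A hypothetical sketch using the AWS CLI; the bucket name and prefix are made up:

```shell
# Transition objects under backups/ to Glacier 90 days after creation
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [{ "Days": 90, "StorageClass": "GLACIER" }]
    }]
  }'
```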

 

Deon's Playground

Placing whatever interests me and more