I use this as a Blog and a Wiki to share and record my thoughts and events as they happen.
Sunday, October 20, 2013
Selecting a new camera
Thursday, October 3, 2013
Setting up Tikona with D-Link DSL-2750U
Versatile, because I can use it with an ADSL/ADSL2+ provider like BSNL, with a 3G provider via its USB port, or with a provider like Tikona that delivers internet over an Ethernet connection.
Also, it's an N300-based router, so I expect it to give me good Wi-Fi speeds. (Haven't tested that yet.)
Anyway, to cut my ramblings short, the intent of this blog entry is to document how I got the router to work with the help of Tikona's tech support.
Here are the steps:
- Reset the router to factory settings by pressing the Reset button under the router
- Connect a LAN cable from LAN port 1 to the computer
- Go to http://192.168.1.1/
- Enter the default user name and password (admin/admin)
- In the Local Network disable the DHCP server
- Then set up the wireless by going to the Wireless Basic option
- There, set the network SSID (whatever you want)
- Hit apply
- Go to Wireless Advanced
- Select security as Auto(WPA/WPA2)
- Enter the network key (whatever you want)
- Hit Apply
- Reboot the router
- Disconnect the LAN cable from the computer
- Connect the LAN cable from Tikona to the computer
- Go to the login page (1.254.254.254) but do not login
- Disconnect the LAN cable from the computer
- Connect the cable from Tikona to LAN port 1 of the router
- Connect the computer/laptop to the Wi-Fi network set up earlier
- Go to the login page (1.254.254.254) on the computer and login
So the router has proved that it can work with Tikona too as long as the DHCP server is disabled. Hope someone finds this useful.
Update:
I found out later that I could only connect computers to the Wi-Fi, as every device needs to log in and neither my phones nor my TV could log in to Tikona's web site. So I decided to go back to the plain old Wi-Fi router I had been using to share Tikona's connection, and use this one for my ADSL connection only.
Wednesday, October 2, 2013
Making my Pentium-S 75MHz a BitTorrent client
Hell, so I decided, why not make it the BitTorrent client? Get Linux with Transmission and SSH installed and use it for only that. It holds no data, so should it stop working, I don't care. Just reformat and start again!
So here was the plan: install Puppy Linux and Transmission, and I should be off!
But things are never that easy.
So, the first problem I faced: Wary Puppy (5.3) wouldn't boot from the CD to which the ISO was written! I vaguely remember this being an issue with the kernel used for that release, which this old system did not support.
So what I had to do was install a really old version of Red Hat Linux, in fact from before they forked it into Fedora Core and RHEL! At that time (2001) I subscribed to PCQuest, and every March the magazine would provide a distribution called PCQLinux, based on Red Hat Linux. I found that Red Hat Linux 7.1 was the one that would boot, so I installed it!
Once the install completed and I was able to log in, I wanted to begin installing Wary Puppy. But things went up in smoke! Literally. I was rebooting the system when a puff of smoke came out the back and the system refused to start.
I suspected the power supply had conked off. So, I bravely went and unplugged everything and took the power supply off intending to take it the next day and get it replaced in the store. But I decided to run one last check before I did that. At that time I noticed that I seemed to be getting a live line on both the input sockets. Turns out the cable I was using had shorted and on closer investigation I found the place where it had shorted. It was blackened and I concluded that's where the puff of smoke came from. So my next step was to get a replacement cable, replug everything back in and test. Which I did and found it all working! DUH! When things go wrong check the cable FIRST!!
Anyway, with that sorted out, a week later when I found the time, I looked into manually installing Puppy following instructions from a few websites. The most useful was http://www.murga-linux.com/puppy/viewtopic.php?t=9965, which explained how to manually install and configure Puppy using LILO.
Here's a summary of what I finally did:
- Created a directory /boot/wary5.5
- Copied the vmlinuz and initrd.gz to this folder
- Copied puppy*.sfs to /boot
- Modified /etc/lilo.conf and added the following stanza:
    # Puppy Linux
    image = /boot/wary5.5/vmlinuz
      root = /dev/ram0
      label = Puppy
      initrd = /boot/wary5.5/initrd.gz
      append = "pfix=ram"
      read-only
- Once I saved this file, I then ran lilo.
- Rebooted, and Puppy started working! Or so it seemed...
- Next step was to free up some disk space and remove the PCQuest Linux 7.1 based on Redhat Linux 7.1
- For the record, here are the partitions I set up on the 4 GB disk:
- 256 MB /boot
- 128 MB swap
- The rest in a single root partition
After a while Puppy would just drop to the "init" prompt. At this point I kind of gave up and turned to the Puppy Linux community for help by posting this query on the forum.
After a couple more frustrating weekends trying out various suggestions, I was ready to give up.
Then one day, my PC refused to boot. A couple of times I tried and the same result, it hung after a while during the boot process. I was really disappointed and discouraged!
But come weekend, I prayed (really) and tried again, and the PC didn't boot but it gave a memory error! So, I opened up the unit, took off the four sticks of RAM, wiped them clean and reinserted them back into their respective slots. And this time the PC started up!
Next, I booted into PCQLinux 7.1 and copied across the files for Puppy Linux 2.0.2 (the Opera version), modified /etc/lilo.conf to boot using these files, and rebooted the PC.
To my absolute delight, PUPPY booted up! I was thrilled. As another experiment I rebooted into PCQLinux7.1 and copied across the files for Puppy 4.1.2 and rebooted. That worked too! Now that I was on a roll, I gave Wary 5.5 another try, but this failed at the same point and refused to boot.
So now that the system was up, I started exploring it all and found that puppy202 was more responsive than puppy412, so I decided to stick with 202. Besides, 202 came with Transmission out of the box while 412 didn't.
Having done that, the next thing was to start a torrent. Unfortunately, the version of Transmission on puppy202 didn't recognize magnet URL schemes, so the next hurdle was to find a way to convert magnet URLs to .torrent files. Here I turned to Google and found a site that did just that, and used it to convert a test magnet URL into a torrent file for download.
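Such conversion services work from the info-hash embedded in the magnet URI itself. As an aside, here is a minimal Python sketch of that extraction step; the URI and hash below are invented purely for illustration:

```python
# Extract the BitTorrent info-hash from a magnet URI.
# The example URI and hash are made up for illustration.
from urllib.parse import urlparse, parse_qs

def magnet_info_hash(magnet):
    """Return the info-hash carried in the magnet link's xt parameter."""
    params = parse_qs(urlparse(magnet).query)
    xt = params["xt"][0]          # e.g. "urn:btih:<40 hex digits>"
    prefix = "urn:btih:"
    if not xt.startswith(prefix):
        raise ValueError("not a BitTorrent magnet link")
    return xt[len(prefix):]

uri = "magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=sample"
print(magnet_info_hash(uri))
# → c12fe1c06bba254a9dc9f519b335aa7c1367a88a
```

A converter service essentially looks this hash up (in the DHT or a cache) and hands back the corresponding .torrent metadata.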
I don't remember the result nor the attempts to get this to work.
However, in all my searches I couldn't find compatible software for puppy202, so I decided to continue with puppy4.1.2 instead, where I found all the software I wanted off the shelf (kind of).
I also found this site, which provided options to download PET files for Transmission. I had earlier tried installing Transmission downloaded from Barry's page here; that worked, but it was a version that didn't support magnet URLs. The former site's 4th download had everything statically linked, or so I thought: there was a dependency missing, viz., libfio-2.0.so, which I found here. I continued to get the errors described here, but realized I could still run Transmission in CLI mode using the following:
- transmission-cli or transmission-daemon
- transmission-remote
- The web interface, available in a browser at http://localhost:9091/transmission once the daemon is started
With that I was almost at the end of my journey. Unfortunately, the journey never ends. I found that X forwarding over SSH wouldn't work, so there was no way to log in to the 'console' of Puppy remotely. I also wanted to get LILO set up within Puppy so I could get rid of the PCQuest installation; alternatively, I need to get GRUB installed. So my setup continues...
For VNC and viewing the console I used X11VNC, which I found here. (Plain old VNC is available from here but does not work on display 0, i.e., the main console.) Installing the server and client was simple enough. I started x11vncServer from the menu and chose to start it every time the system boots. A few more screens (enter the VNC server password, etc.) and it was all started.
From another system (my laptop) I connected vncviewer to the Puppy box and was able to see its console.
Finally, I also forwarded the Transmission web port over SSH (-L 9091:localhost:9091) to enable access to the Transmission web interface remotely. I also created a symlink to /root/.config/transmission-cli as /root/.config/transmission-daemon, which allowed me to start a download using transmission-cli and then continue it using transmission-daemon.
And finally, to get everything started automatically on a reboot, I wrote a small script, startTransmission.sh, and invoked it from /etc/rc.d/rc.local.
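The script itself has not survived in this post; a minimal sketch of what such a boot script could look like, assuming the transmission-daemon binary described above is on the PATH and using the symlinked config directory mentioned earlier (both assumptions):

```shell
#!/bin/sh
# startTransmission.sh: hypothetical reconstruction, the original is lost.
# Invoked from /etc/rc.d/rc.local to start the torrent client at boot.
transmission-daemon --config-dir /root/.config/transmission-daemon
```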
Update:
One more useful link is this one, which helps to autostart GUI applications on different desktops. If I ever need that, it will be good to remember.
Update:
While searching for alternatives to allow a central file server, I found this link, which claims to be a tool to remotely log in to Puppy. I've not tried it out, but it may be a useful one to try.
Tuesday, January 22, 2013
Installing/Upgrading to Fedora 18
I was expecting a simple upgrade, and it was that: quick, simple and relatively easy.
But there were a few bits of customization I had to do. I'm using the rest of this post to document the things I discovered and did.
- Quite obviously, all the games I had installed had to be re-installed. And I had to copy across a number of yum repository configurations too:
- Adobe repository for flash player
- rpm fusion for all the non-free stuff
- plex repository for plex media server to stream multimedia to my TV
- The installation didn't allow me to choose my hostname and domain, so I got stuck with localhost.localdomain. Changing it required the hostnamectl command (hostnamectl set-hostname <name>), which makes the change permanent across reboots. Source: http://docs.fedoraproject.org/en-US/Fedora/18/html/Release_Notes/sect-Release_Notes-Changes_for_Sysadmin.html
- On another system I wanted to run PreUpgrade, but FC18 has dropped support for it. Instead you have to use a tool called fedup. More details here: https://fedoraproject.org/wiki/FedUp. Some useful hints for upgrading are also available at https://fedoraproject.org/wiki/Upgrading_Fedora_using_yum
- Plex Media Server refused to work. Looking over the logs, I realized that SELinux was preventing rsync from writing into the plugins directory. So I had to add a policy to allow this using a few commands, one of which was audit2allow. However, this command isn't installed by default in FC18, and I had to install it using yum install /usr/bin/audit2allow. Source: http://danwalsh.livejournal.com/61710.html
- To get Plexmedia to work finally I had to run the following commands:
- grep rsync /var/log/audit/audit.log | audit2allow -M mypol
- semodule -i mypol.pp
- systemctl stop plex
- cd "/var/lib/plexmediaserver/Library/Application Support"
- rm -rf Plex\ Media\ Server
- systemctl start plex
- systemctl status plex
- The status command above highlights any errors. I saw a lot of failures with rsync and hence started fiddling with the SELinux policies.
- I had to repeat the commands about six times and it still didn't work
- Finally, I gave up and disabled SELinux (in /etc/sysconfig/selinux). I'm sure there is another way, but I was running out of time and patience.
- Rebooted, and Plex started up. So it's confirmed that SELinux was blocking rsync.
- Good links on Plex administration:
- http://wiki.plexapp.com/index.php/PlexNine_PMS_TipsTricks#Linux
- http://wiki.plexapp.com/index.php/PMS
- http://wiki.plexapp.com/index.php/PlexNine_PMS_TipsTricks#Plex_Media_Server_Tips
- With that success, I then upgraded another existing FC17 system to FC18 using Fedora Update, aka fedup.
- This command line utility is simple too:
- fedup --network 18 --debuglog fedupdebug.log
- This was for a network update, i.e., the latest versions of the packages I had installed were downloaded and then once I rebooted there was an option to Upgrade.
- The only issue I faced was that I kept getting a no host found error when downloading the package perl-ExtUtils-ParseXS-3.16-235.fc18.noarch.rpm.
- Finally I had to copy this off the ISO I had downloaded to use for the fresh install on the first system.
- Once that hurdle was crossed the remaining packages were downloaded and the upgrade was smooth.
- I'm now blogging this using the upgraded FC-18.
- One weird issue I'm facing right now is in the browser (Opera) when writing this blog, after typing about 3-4 characters, the cursor moves to the start of the line, types a j and then starts typing normally for the next 3-4 characters.
- I'm hoping this is some temporary issue and a reboot resolves it.
- If not, I'll update this post with the details.
Setting up FC after a fresh install
- After the fresh install, do a 'yum -y update'. This will ensure everything's up to date.
- Add my user name to the sudoers file. Edit /etc/sudoers (ideally via visudo) and, after the Defaults section (the bunch of lines starting with the word Defaults), add the following line:
<username> ALL=(ALL) ALL
This enables the sudo command with all sudo permissions for that user. Not the most secure, but the most convenient.
- Install synergy
- Install the rpmfusion repositories for yum using the command:
sudo yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm
- Run the following commands:
- sudo yum install kdenlive
- sudo rpm -ivh adobe-release-i386-1.0-1.noarch.rpm
- sudo yum install flash-plugin nspluginwrapper.x86_64 nspluginwrapper.i686 alsa-plugins-pulseaudio.i686 libcurl.i686
- sudo yum install tiger*vnc*server*
- sudo yum -y update
Sunday, October 24, 2010
Removing Linux from a dual boot system
I've had to remove Linux on some systems which were dual bootable with Linux. (As much as I prefer Linux, sometimes Windows is the right system to have)
In the past, I'd used the windows 98 rescue disks and the command:
fdisk /mbr
This worked well for me previously, but this time I had a system without a floppy drive and was at a loss as to how to run fdisk under Windows XP. That's when I found this site, tried its recommendation, and ran the following command from the XP Recovery Console:
fixmbr
I was intimidated at first by all the messages I saw on the screen, but went ahead boldly and ran everything. And it worked well: I now booted straight into Windows and never saw the GRUB screen again!
The next step was to get into the Windows disk manager, drop the Linux (and Linux swap) partitions, and reformat them to NTFS.
Once done, I now have a system free of Linux.
Thursday, April 29, 2010
Improving your Code
Composed methods
Each method does one task. This makes it granular and easy to read and understand.
Within a method all operations are at the same level, functionally.
Smaller methods are also documented by their names. In fact, working backwards will help you identify methods that need to be refactored: after implementing a method, change its name to reflect what it actually does. Instinctively, you'll notice when the name is too long or does not summarise the operation correctly. That means the method needs to be refactored!
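As an illustration (in Python, with invented names; the idea is language-agnostic): the first version below mixes parsing, filtering and formatting in one body, while the second composes it from single-task methods whose names document them.

```python
# Before: one function parses, filters and formats, all in a single body.
def active_names(raw):
    out = []
    for line in raw.splitlines():
        if not line:
            continue
        name, status = line.split(",")
        if status == "active":
            out.append(name)
    return "\n".join(out)

# After: composed methods. Each does one task, at one level of abstraction,
# and each name summarises its operation.
def parse_records(raw):
    return [line.split(",") for line in raw.splitlines() if line]

def keep_active(records):
    return [name for name, status in records if status == "active"]

def as_report(names):
    return "\n".join(names)

def active_names_composed(raw):
    return as_report(keep_active(parse_records(raw)))

data = "ann,active\nbob,idle\ncara,active"
print(active_names_composed(data))  # prints "ann" then "cara"
```

Working backwards as described above, if keep_active had ended up named filter_and_format_records, the name itself would flag it as a refactoring candidate.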
Test Driven Development
Every engineer I've spoken to has heard this term, but most aren't aware of what it means or how to do it. The most common question I've heard is: "What's the use of unit-testing?". I'm not going to get into that discussion here, but will summarise the benefits:
- Think like a user. The code/implementation being delivered needs to be used.
- Aids in creating composed methods, because smaller atomic methods are easier to test
- Improves reliability by providing an automated regression suite
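A tiny invented example of the rhythm: the test is written first, from the caller's point of view, and then the simplest implementation makes it pass.

```python
# Test first: written before the code exists, thinking like a user of it.
def test_slugify():
    assert slugify("Improving your Code") == "improving-your-code"
    assert slugify("  KISS  ") == "kiss"

# Then the simplest implementation that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # the suite doubles as an automated regression check
```

If the requirement later grows (say, stripping punctuation), the existing test keeps the old behaviour safe while you refactor.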
FindBugs and Checkstyle are two tools that help in reducing common bad practices, bugs and code complexity.
Good citizenship
In other words, ensure all references have the right scope. This includes:
- Scope identifiers, viz., private, default, protected, public. My recommendation is to make everything private and increase scope as and when required
- Variable scope, viz., block, method, field, static. My recommendation again is to declare a variable as close as possible to its point of usage and move it up a level only if required. Better still, pass values around using method parameters rather than increase a variable's scope.
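A short invented sketch of that last point, in Python: instead of widening a variable so two pieces of code can share it, let the value travel through parameters and return values.

```python
# Wider scope than needed: a module-level variable shared by mutation.
running_total = 0

def add_sale(amount):
    global running_total        # the variable now leaks across the module
    running_total += amount

# Narrower scope: the value travels via parameters and return values,
# so each variable lives only where it is actually used.
def add_sale_scoped(total, amount):
    return total + amount

total = 0
for amount in (10, 25, 5):
    total = add_sale_scoped(total, amount)
print(total)
# → 40
```

The scoped version is also easier to test, since each call is independent of hidden state.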
KISS
Keep It Stupidly Simple! That means choosing the simplest implementation for the current requirement. Don't indulge in speculative development. A lot of the time, simple things are cheap to throw away. If it gets complex, you probably need a framework, and you should look at reusing an existing one rather than building a new one. Simple code is refactorable and easy to change, resulting in more agile and adaptable implementations.
Refactoring is not rework
Many times I've heard the complaint:
"We had to change <insert your code here> and we wasted time in refactoring. If only we had done a little up front design we would have done it right the first time!"
But this is not true. You saved time the first time you did it, without having to worry about complexity. Secondly, if you followed the principle correctly, you would have implemented something simple and hence cheap to throw away. And thirdly, whoever got it right the first time?
If you are a Linux fan, even today there are new releases of Linux; and if you are a Windows fan, Windows is at version 7.x. No one got it right the first time, not even Einstein. So why do you expect yourself to be perfect? You will have to change whatever you do. If you come to terms with that, then why not keep it simple? Don't get attached to the code. And refactor. It's part of software engineering. It's part of the implementation cost. It cannot be reduced! And hence it is not waste, but part of the process.
This kind of thinking is usually triggered by managers asking: "How can we improve the process and reduce waste". Managers have to ask this. They control the budget and who doesn't want the best value for money? But we as engineers have to stop viewing refactoring as waste and something to be reduced. It is part of the software engineering process!
Ask Questions
As simple as this sounds, a lot of the time questions aren't asked, especially of authorities. This means you question suggestions, opinions, decisions, implementations, traditions, etc. Questions will either lead to a change, hopefully for the better, or will reaffirm that you've got the right implementation. So what's the harm in asking them? In today's world, questions are no longer an indication that you don't know something; quite the contrary, they indicate whether you are paying attention and how much you've understood. I've always been of the opinion that 'there are no stupid questions, only stupid answers'. The only exception is when you've not done your homework before asking, or weren't paying attention. In which case, apologise for your bad behaviour, but still ask the question!