Sunday, October 20, 2013

NFS related to SNMP

You might find that SNMP suddenly stops working.

If that is the case, make sure any NFS mounts are working properly by checking df -h or running the mount command on the command line.

This is because a failed or hung NFS mount can cause SNMP to fail as well: snmpd can block while it scans the mounted filesystems.

Possible solutions:

  1. The NFS server may be on a different subnet that the machine cannot reach
  2. If a firewall is running, disable it (or open the required ports)
  3. Make sure SELinux is disabled (or set to permissive)
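Before restarting snmpd, a quick probe can show whether any NFS mount is hung. This is only a sketch: the nfs_mounts helper is a hypothetical name that just filters NFS entries out of mount-table text, and the loop assumes the timeout and stat utilities are available.

```shell
# Sketch: find NFS mount points and probe each one with a timeout.
# A probe that times out usually indicates a stale mount that can hang snmpd.

# Filter NFS mount points out of /proc/mounts-style text
# (fields: device, mountpoint, fstype, options, dump, pass).
nfs_mounts() { awk '$3 ~ /^nfs/ {print $2}'; }

nfs_mounts < /proc/mounts | while read -r mp; do
    if timeout 5 stat "$mp" >/dev/null 2>&1; then
        echo "OK:    $mp"
    else
        echo "STALE: $mp"
    fi
done
```

Any mount reported as STALE is a likely culprit and should be remounted (or the NFS server fixed) before SNMP is restarted.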

Comments would be appreciated. 

Tuesday, September 24, 2013

Linux Reverse Path Filtering

I faced an issue today that I thought would be a simple task, but it took 30 to 40 minutes of digging to overcome.

Basically I have a server with 4 NIC ports, and I had to configure two NICs on different subnets. I thought it would be simple, but it didn't work.

After the configuration, the previously working NIC stopped working, and so did the newly configured one.

After some digging I found that reverse path filtering had to be disabled, which I did as follows:

echo "0" > /proc/sys/net/ipv4/conf/em1/rp_filter
em1 - ethernet device name

This has to be done for every NIC that needs to be configured.
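Since every interface needs the same change, a small loop saves typing. The following is a sketch: disable_rp_filter is a hypothetical helper, and the directory argument defaults to the real sysctl tree but is parameterised only so the loop can be tried safely against a test directory.

```shell
# Write 0 to the rp_filter knob of every interface under the given conf tree.
# With no argument it targets the live /proc/sys/net/ipv4/conf tree (requires root).
disable_rp_filter() {
    base="${1:-/proc/sys/net/ipv4/conf}"
    for f in "$base"/*/rp_filter; do
        [ -e "$f" ] && echo 0 > "$f"
    done
}
```

Run disable_rp_filter as root to apply it to all interfaces at once, including the special "all", "default" and "lo" entries.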

To make the change permanent,
"echo 'net.ipv4.conf.em1.rp_filter = 0' >> /etc/sysctl.conf"
em1 - ethernet device name

Add it for the loopback interface as well,
"echo 'net.ipv4.conf.lo.rp_filter = 0' >> /etc/sysctl.conf"
Finally, for the changes to take effect,
"sysctl -p"
Thanks.

Friday, September 20, 2013

Pacemaker Basic Setup with CentOS 6.4 (64-bit)

After struggling for many days, I accomplished a basic setup of Pacemaker with Corosync on CentOS 6.4. Here are the steps, which I thought were worth sharing.

I assume CentOS is already installed on both host machines. I am using 2-node clustering here.


Networking
As a first step, disable SELinux and iptables:
# service iptables stop
# chkconfig iptables off
# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
(to make the change permanent)
Short Node Names
We need to update /etc/sysconfig/network. This is what it should look like before we start:
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=pcmk-1.example.org
GATEWAY=
However, we're not finished. The machine won't normally use the new host name until it reboots, but we can force it to update:
# source /etc/sysconfig/network
# hostname $HOSTNAME
Now you can check that the machine is using the correct name:
# uname -n
pcmk-1.example.org
Configure SSH
SSH is a convenient and secure way to copy files and perform commands remotely. For the purposes of this guide, we will create a key without a password (using the -N option) so that we can perform remote actions without being prompted.

Creating and Activating a new SSH Key
# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
91:09:5c:82:5a:6a:50:08:4e:b2:0c:62:de:cc:74:44 root@pcmk-1.clusterlabs.org
The key's randomart image is:
+--[ DSA 1024]----+
|==.ooEo.. |
|X O + .o o |
| * A + |
| + . |
| . S |
| |
| |
| |
| |
+-----------------+ 
# cp .ssh/id_dsa.pub .ssh/authorized_keys
Install the key on the other nodes and test that you can now run commands remotely without being prompted:
# scp -r .ssh pcmk-2.example.org:
The authenticity of host 'pcmk-2.example.org (192.168.122.102)' can't be established.
RSA key fingerprint is b1:2b:55:93:f1:d9:52:2b:0f:f2:8a:4e:ae:c6:7c:9a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pcmk-2.example.org,192.168.122.102' (RSA) to the list of known hosts.root@pcmk-2.example.org's password:
id_dsa.pub 100% 616 0.6KB/s 00:00
id_dsa 100% 672 0.7KB/s 00:00
known_hosts 100% 400 0.4KB/s 00:00
authorized_keys 100% 616 0.6KB/s 00:00
# ssh pcmk-2.example.org -- uname -n
pcmk-2.example.org
Cluster Software Installation
# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
# yum install -y pacemaker corosync
# yum install -y cman gfs2-utils gfs2-cluster
# yum install -y crmsh.x86_64
Configure Corosync
The instructions below only apply for a machine with a single NIC. If you have a more complicated setup, you should edit the configuration manually.
# export ais_port=4000
# export ais_mcast=239.255.1.1
Next we automatically determine the host's address. By not using the full address, we make the configuration suitable for copying to other nodes.
# export ais_addr=`ip addr | grep "inet " | tail -n 1 | awk '{print $4}' | sed s/255/0/g`
Display and verify the configuration options
# env | grep ais_
ais_mcast=239.255.1.1
ais_port=4000
ais_addr=192.168.122.0
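The pipeline above derives the network address from the interface's broadcast address by rewriting every 255 to 0, which can misfire if 255 appears in a middle octet. A slightly safer sketch for /24 networks zeroes only the final octet (net_of is a hypothetical helper, not part of the original guide; for other netmasks the network address should be computed properly, e.g. with ipcalc):

```shell
# Derive a /24 network address by zeroing only the final octet of the address.
net_of() { echo "$1" | sed 's/\.[0-9]\{1,3\}$/.0/'; }

net_of 192.168.122.255   # prints 192.168.122.0
net_of 10.30.2.98        # prints 10.30.2.0
```

Unlike sed s/255/0/g, this leaves an address such as 10.255.3.98 intact apart from its last octet.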
Note: Please make sure multicast is enabled on your switch; if it is not, you will face communication problems between your nodes. In my case I struggled and then changed to unicast, because multicast didn't work in my setup. For unicast, an example file is available by default at /etc/corosync/corosync.conf.example.udpu

Once you’re happy with the chosen values, update the Corosync configuration
# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastaddr:.*/mcastaddr:\ $ais_mcast/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*mcastport:.*/mcastport:\ $ais_port/g" /etc/corosync/corosync.conf
# sed -i.bak "s/.*\tbindnetaddr:.*/bindnetaddr:\ $ais_addr/g" /etc/corosync/corosync.conf
Lastly, you’ll need to enable quorum
cat << END >> /etc/corosync/corosync.conf
quorum {
provider: corosync_votequorum
expected_votes: 2
}
END


In my case, my /etc/corosync/corosync.conf file looks like the following:
# cat /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                member {
                        memberaddr: 10.30.2.98
                }
                member {
                        memberaddr: 10.30.2.99
                }
                ringnumber: 0
                bindnetaddr: 10.30.2.0
                mcastport: 4000
                ttl: 1
        }
        transport: udpu
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: yes
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
quorum {
           provider: corosync_votequorum
           expected_votes: 2
}
Propagate the configuration to the other node:
# for f in /etc/corosync/corosync.conf /etc/hosts; do scp $f pcmk-2.example.org:$f ; done
Verify Corosync Installation
# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
Check that the cluster started correctly and that an initial membership was able to form:
# grep -e "corosync.*network interface" -e "Corosync Cluster Engine" -e "Successfully read main configuration file" /var/log/messages
Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Corosync Cluster Engine ('1.1.0'): started and ready to provide service.
Aug 27 09:05:34 pcmk-1 corosync[1540]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
# grep TOTEM /var/log/messages
Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transport (UDP/IP).
Aug 27 09:05:34 pcmk-1 corosync[1540]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] The network interface [192.168.122.101] is now up.
Aug 27 09:05:35 pcmk-1 corosync[1540]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Start Corosync on the other node and check for any error messages.

Verify Pacemaker Installation
# grep pcmk_startup /var/log/messages
Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: CRM: Initialized
Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] Logging: Initialized pcmk_startup
Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Service: 9
Aug 27 09:05:35 pcmk-1 corosync[1540]: [pcmk ] info: pcmk_startup: Local hostname: pcmk-1.example.org
Now try starting Pacemaker and check that the necessary processes have been started:
# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager: [ OK ]
# grep -e pacemakerd.*get_config_opt -e pacemakerd.*start_child -e "Starting Pacemaker" /var/log/messages
Next, check for any ERRORs during startup - there shouldn’t be any.
# grep ERROR: /var/log/messages | grep -v unpack_resources
Repeat on the other node and display the cluster’s status.
# crm_mon -1
============
Last updated: Thu Aug 27 16:54:55 2009
Stack: openais
Current DC: pcmk-1 - partition with quorum
Version: 1.1.5-bdd89e69ba545404d02445be1f3d72e6a203ba2f
2 Nodes configured, 2 expected votes
0 Resources configured.
============
Online: [ pcmk-1.example.org pcmk-2.example.org ]
So far we have built the basic setup; next we have to add resources, with Active/Passive or Active/Active clustering. That will come in upcoming posts.

Thanks for reading, comments would be appreciated. Article taken from www.cluster.org.

Monday, August 19, 2013

Ubuntu Edge Smartphone

How the Edge compares

                 | Ubuntu Edge                          | Apple iPhone 5          | Samsung Galaxy S4
Mobile OS        | Dual-boots Android and Ubuntu mobile | iOS                     | Android
Desktop OS       | Ubuntu Desktop                       | No                      | No
RAM              | 4GB                                  | 1GB                     | 2GB
Internal storage | 128GB                                | 64GB                    | 16GB
Screen           | 720 x 1,280, 4.5 inches              | 640 x 1,136, 4 inches   | 1,080 x 1,920, 5 inches
Protection       | Sapphire Glass                       | Corning Gorilla Glass   | Corning Gorilla Glass 3
Connectivity     | Dual-LTE, GSM                        | LTE, GSM                | LTE, GSM
Speakers         | Stereo                               | Mono                    | Mono
Battery          | Silicon-anode Li-ion                 | Li-ion                  | Li-ion
Price            | $695                                 | $849*                   | $750**

CPU/GPU and screen technology to be finalised before production. * Apple Store ** Best Buy

What is Ubuntu Edge?
In the car industry, Formula 1 provides a commercial testbed for cutting-edge technologies. The Ubuntu Edge project aims to do the same for the mobile phone industry -- to provide a low-volume, high-technology platform, crowdfunded by enthusiasts and mobile computing professionals. A pioneering project that accelerates the adoption of new technologies and drives them down into the mainstream.
This beautifully crafted smartphone is a proving ground for the most advanced mobile technologies on the horizon, a showpiece for true mobile innovation. And at the heart of it all is convergence: connect to any monitor and this Ubuntu phone transforms into an Ubuntu PC, with a fully integrated desktop OS and shared access to all files.
We’re fascinated by converged computing, the idea that the smartphone in your pocket can also be the brain of the PC on your desk. We’ve shaped Ubuntu so you can transition seamlessly between the two environments. Now all that’s needed is a phone that’s designed from the ground up to be a PC as well.
The Ubuntu Edge is our very own superphone, a catalyst to drive the next generation of personal computing.

Thursday, June 27, 2013

Allowing FTP Access to Files Outside the Home Directory

When we set up FTP server software (regardless of whether it is proftpd, vsftpd, etc.) we might face a dilemma: we want to restrict the access that FTP users have (normally limiting them to files in their own home directory), but we also want to allow them access to another folder that lives in a different location (such as development files for whatever work they are doing).
The problem is that if we configure the chroot restriction for the FTP users, we will notice that, as expected, they are locked in the chrooted folder (let's say their home directory). If we try to create a symlink to the other folder they need access to, it will simply not let them change into that folder (break out of the chroot), and this is normal. To illustrate, let's say I am using vsftpd with one user, ftp_user. The chroot restriction is enabled on FTP accounts and his home is in /home/ftp_user, but I need to give him access to another folder, /var/www/dev/. Even though I am using vsftpd here, the same concept applies to any other FTP server software.
The configuration for vsftpd is a basic one (I will include it at the end of the post for reference). The important option here is:
chroot_local_user=YES
Of course, one solution to overcome this limitation is to disable chroot and allow the FTP users full access to all the system files. This is not at all recommended, and this little tip will show you how to achieve the goal with chroot enabled. The solution is to mount the needed directory using the --bind parameter. From the man page of mount: "--bind Remount a subtree somewhere else (so that its contents are available in both places)".
So we might do something like:
mkdir /home/ftp_user/www_dev
mount --bind /var/www/dev/ /home/ftp_user/www_dev
After this the ftp user will be able to see the needed files in his home directory and use them in his ftp client as if they were local files.
If you need to make this configuration permanent you can either add the mount command in some startup script or you can just include a line in /etc/fstab:
/var/www/dev  /home/ftp_user/www_dev    none    bind    0       0
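The fstab fields above are: source, target, filesystem type ("none" for a bind mount), mount options ("bind"), dump flag and fsck pass. As a small sketch, here is a hypothetical helper that prints such a line for any source/target pair (the paths are the same example ones used above):

```shell
# Hypothetical helper: print an fstab entry for a bind mount.
# Fields: source, target, fstype "none", option "bind", dump 0, fsck pass 0.
bind_fstab_line() {
    printf '%s\t%s\tnone\tbind\t0\t0\n' "$1" "$2"
}

bind_fstab_line /var/www/dev /home/ftp_user/www_dev
```

Append the output to /etc/fstab as root, then run mount -a to apply it without rebooting.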
I hope you have found this tip useful in case you have a similar issue. Just for reference, here is the vsftpd configuration used (the important parameter is only the one noted above, chroot_local_user):
/etc/vsftpd.conf
listen=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/certs/vsftpd.pem
An article from http://www.ducea.com/2006/07/27/allowing-ftp-access-to-files-outside-the-home-directory-chroot/

Tuesday, May 14, 2013

On Site vs Google Cloud Security: How do they compare?


Two things that we get asked about a lot are security and privacy in the cloud. Some businesses are concerned about adopting a cloud approach, as they think they'll lose control of their data when it's not stored on servers they can see and touch. Although this is usually an initial concern, with the necessary due diligence, organisations are actually considering moving to the cloud to gain the additional security benefits it offers.

Let’s take a look at some of the security concerns involved with hosting your data on-site:

  • Companies have multiple operating systems, each with different versions - each requiring different security patches
  • Most companies take 25 to 56 days on average to deploy an OS patch - meaning you are vulnerable for the time in between
  • Companies spend more than $2 billion annually on patches - when you could just benefit from the economies of scale of a cloud service provider
  • While you’re working on deploying your patch, other people are working on reverse engineering and gaining access to your environment...
That begs the question: "Is my data safe on premise?"

Did you know:

  • 60% of corporate data resides unprotected on PC desktops and laptops?
  • 1-out-of-10 laptop computers will be stolen within 12 months of purchase?
  • 66% of USB thumb drive owners report losing them, over 60% with private corporate data on them
In the cloud, things are different. For the first time, consumer technology is more powerful than enterprise technology. The real security issue with cloud computing is assessing the security of your cloud vendor.
There are currently no real cloud security standards. This means that companies don’t know what it is they need to be looking out for in a solution, and different vendors end up using different technologies.
There is also no standard cloud certification. How do you benchmark the ability of each of these vendors? Most won't even let you audit them!

Let's look at some of the security functionality that Google provides on its cloud platforms, and see how it compares to on-premise solutions:

Two-Step Authentication
Going cloud with Google: This is a built-in feature of Google Apps, similar to the OTP features provided by internet banking, and it can be enforced for your users based on OU. It is free and easy to set up and manage.
Maintaining on-premise: Your best bet would be to use a third-party tool to accomplish this (e.g. Vasco OTP, costing anything from $100 to $100k).

Apps Development Security
Going cloud with Google: Google uses a rigorous code development process. All code is subject to review, and all projects go through a security review as well. Google has tools for vulnerability testing before deployment, and the tools are constantly refreshed.
Maintaining on-premise: Rigorous testing and risk assessment need to be done on new patches by your own team (who might be multi-tasking on other projects at the same time).

GFS (Google File System)
Going cloud with Google: Google has its own file system, called GFS. With GFS, files are split up and stored in multiple pieces on multiple machines. Filenames are random (they do not match content type or owner). There are hundreds of thousands of files on a single disk, and all the data is obfuscated so that it is not human readable. The algorithms used for obfuscation change all the time.
Maintaining on-premise: There aren't many choices, and most opt for the standard file system included with the server OS, which offers little to no security in this regard.

Physical and Personnel Security
Going cloud with Google: Google has dozens of data centres for redundancy. These data centres are in undisclosed locations and most are unmarked for protection. Access is allowed to authorised employees and vendors only. Some of the protections in place include 24/7 guard coverage, electronic key access, access logs, closed-circuit television, alarms linked to guard stations, internal and external patrols, dual utility power feeds, and backup UPS power and generators.
Maintaining on-premise: Your physical security is limited to what you have in your building, not to mention the requirements needed to ensure your DMZ and LAN are secure.

As you can see, Google’s Security model has a lot to offer. With Google, security is a chain, and all layers matter.
The bottom line is that, sure, you can maintain an on-premise solution, one that you can see and touch and are 100% responsible for. But try as hard as you want, you won't be able to maintain an on-site system as securely and efficiently as Google maintains theirs. Are the layers you put in place and maintain (along with all the other IT projects you run) going to be better than the security layers offered by a specialist cloud computing provider? The answer depends entirely on the network, resources and skill you have in place. If you don't have the resources and budget to maintain the required security levels, then a cloud solution could suit you perfectly.

Friday, May 10, 2013

Smartphone docked to an IP Phone....!

Nowadays smartphones are becoming more popular, and at the enterprise level IP phones are used in many companies.

Now... what if we had an option to dock our smartphone into an IP phone, so that we could use the IP phone as a smartphone? Not all features, but some, such as making and receiving calls and checking SMS. I think sending SMS would be difficult with a T9 keypad, but we could try to enhance that later.

Also, the smartphone would get charged from the IP phone's power itself.


Comments would be appreciated.....


Thursday, May 2, 2013

Seven tips for avoiding VoIP Toll Fraud

Business customers are increasingly utilising VoIP technology, and for good reason.

By integrating their telephony within an IP environment, business customers are able to save a great deal of cost on both infrastructure and telecommunications.

At the same time, they can improve their business processes and customer experience by leveraging unified communications.
While the positives of moving to an IP telephony solution far outweigh the negatives, opening your phone system up to the Internet does increase risk.
Toll fraud has been a problem for a long time, but it has increased exponentially with the growth of VoIP. Here are seven tips to help avoid it:


  1. Apply a daily toll limit with your VoIP provider
  2. Use TLS protected SIP
  3. Employ a stateful firewall
  4. Segregate your business network
  5. Encrypt your site-to-site calls
  6. Use strong passwords
  7. Do not allow generic PINs


Sunday, March 3, 2013

Asterisk WebRTC

Asterisk 11 joins WebRTC to create a revolution in real-time communication platforms.

Asterisk® 11 includes an implementation of WebRTC, making it easy to build click-to-call systems and soft-phone interfaces using nothing more than a web page and its JavaScript API.

A small article with a VirtualBox implementation is here: http://nerdvittles.com/?p=5321