
Wednesday, 26 June 2013

Squid 2.7 Stable9 on ClearOS 6.3

Just keeping a note here, since I'm forgetful (or really just too lazy to remember things)... :D

Here is the sequence of steps for installing squid 2.7s9 patched with youtube non-range on ClearOS 6.3 in Gateway mode. This is the quick way; the long version is on the ClearOS Indonesia forum.

1. Update the system and reboot:
    yum update
    reboot

2. Copy every file in the squid folder to a dummy folder (you will need them again later).
3. Download the prebuilt squid package to root, then stop and remove the old squid and install the new one:
    service squid stop
    yum remove squid
    yum localinstall --nogpgcheck squid-2.7.STABLE9.xx.xxx.best
4. Install the web proxy app:
    yum install app-web-proxy (ClearOS 6.x)
    or
    yum install app-squid (ClearOS 5.x)
5. Copy squid.conf, storeurl.pl, and their companions into the squid folder.
6. Copy every file from the dummy folder back into /etc/squid, except squid.conf and storeurl.pl (if present).
  7. Create the cache folder(s) as needed (if they don't already exist) and fix ownership:
    mkdir /cache1
    chown squid:squid /cache1

  8. Fix the permissions on storeurl.pl and squid.conf (squid:root, 0640):
    chmod 777 /etc/squid/storeurl.pl
    chown squid:root /etc/squid/squid.conf
    chmod 0640 /etc/squid/squid.conf

  9. Adjust the contents of squid.conf (IP, DP); a sample of the directives involved appears after step 11.
10. Check squid; if there are no error messages, continue by creating the swap directories, then start it:
      squid -k parse
      squid -z
      service squid start 

      service squid restart
11. To run squid automatically at boot, add these lines at the end of /etc/rc.local:
       echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range
       ulimit -HSn 65535
       /usr/sbin/squid -NDd1 &
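
For step 9, these are the kinds of directives to look at (a sketch only; the addresses and cache sizes below are hypothetical and must match your LAN and the cache folder created in step 7):
    # hypothetical values - adjust to your own network
    http_port 3128 transparent
    acl localnet src 192.168.1.0/24
    http_access allow localnet
    cache_dir aufs /cache1 10000 16 256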
Note: on ClearOS 5.2, enabling transparent mode from webconfig makes squid error out.
Re-check the top of squid.conf; if a new line "http_access allow localhost" has appeared,
just delete it or comment it out with "#".


Extras:
Install ccze (to make the logs colorful):
     rpm -Uvh http://mirror.fraunhofer.de/download.fedora.redhat.com/epel/5Client/i386/ccze-0.2.1-6.el5.i386.rpm

View the log:
     tail -f /var/log/squid/access.log | ccze

To make the server shut down when the power button on the case is pressed, install acpid:
        yum install acpid
 

Disable the auto-update function so squid doesn't get upgraded to squid 3.xx :D, via:
    webconfig - system - software update - edit - disable

source: http://beldin-best.blogspot.com/2012/12/squid-27-stable9-di-clearos-63.html

Wednesday, 6 March 2013

Gnome/KDE (Firefox + Flashplayer) For ClearOS 6.3

Overview

The ClearOS desktop environment is useful for certain applications that require the X interface. For simplicity and security, ClearOS ships with only enough graphical console components to enable remote administration through a web browser (Webconfig).

Preparation

You will need to exit the graphical console in order to use the graphical desktop, because only one or the other can occupy the graphical space at a time. To exit the graphical console, press CTRL + ALT + BACKSPACE.
This will place you at a black screen with a red box asking for root's credentials. Logging in here relaunches the graphical console; for now, stay at the red screen.
Switch to command line by pressing CTRL+ALT+F2. Log in as root.

Installing KDE

To install the graphical desktop from command line please run the following:

yum --enablerepo=clearos-core install gnome-terminal kdemultimedia

There will be quite a few packages that get downloaded and installed.

Launching the Graphical Desktop

From the console and with the Graphical Console logged out, start the graphical desktop by typing:

reboot 
startx
 
Install Firefox
 
yum install http://mirror.centos.org/centos/5/updates/i386/RPMS/firefox-10.0.12-1.el5.centos.i386.rpm

Install Adobe YUM Repository RPM package

## Adobe Repository 32-bit x86 ## 
 rpm -ivh http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm 
 rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux
 
## Adobe Repository 64-bit x86_64 ## 
 rpm -ivh http://linuxdownload.adobe.com/adobe-release/adobe-release-x86_64-1.0-1.noarch.rpm 
 rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-adobe-linux
 
Install Adobe Flash Player 11.2 on Fedora 18/17/16/15/14/13/12, CentOS 6.3/6.2/6.1/6, and Red Hat (RHEL) 6.3/6.2/6.1/6 (32-bit and 64-bit):
yum install flash-plugin nspluginwrapper alsa-plugins-pulseaudio libcurl

Install Adobe Flash Player 11.2 on CentOS 5.8 and Red Hat (RHEL) 5.8 (32-bit and 64-bit):

yum groupinstall "Sound and Video"
 
yum install flash-plugin nspluginwrapper curl

  

Tuesday, 19 February 2013

Install webmin 1.620-1 on ClearOS 6.3 via YUM

1. Add the webmin repository to YUM. Open a terminal and create a new repo entry:
nano /etc/yum.repos.d/webmin.repo
Then enter the following lines:
[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
mirrorlist=http://download.webmin.com/download/yum/mirrorlist
enabled=1

Fetch and import the GPG key:
wget http://www.webmin.com/jcameron-key.asc

rpm --import http://www.webmin.com/jcameron-key.asc
2. Install webmin:
yum install webmin
3. When finished, you can access your webmin page in Firefox at:
https://yourserveripaddress:10000
Replace yourserveripaddress with your server's actual IP address.
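As an optional sanity check (assuming webmin's default port of 10000), you can confirm that something is listening:
netstat -tlnp | grep 10000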

Friday, 15 February 2013

Manually Mounting a Hard Disk / USB Flash Drive in Linux

Linux desktop users are often confused about how to find their files from the console. If you don't yet know how to mount a flash drive or hard disk manually, follow the steps below.
1. List the disks attached to your computer:
fdisk -l
The output will look something like this:
Disk identifier: 0x00013cce
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    87889919    43943936   83  Linux
/dev/sda2        87891966   312498175   112303105    5  Extended
/dev/sda5       302735360   312498175     4881408   82  Linux swap / Solaris
/dev/sda6        87891968   302735359   107421696   8e  Linux LVM
/dev/sdb1   *          63    15647309     7823623+   b  W95 FAT32
In the output above the flash drive is /dev/sdb1; you can tell from its size and the W95 FAT32 partition type.
2. Create a folder anywhere you like; in this example I create /mnt/flashdisk:
mkdir /mnt/flashdisk
3. Mount the drive with the command below:
mount -t vfat /dev/sdb1 /mnt/flashdisk
4. Now have a look inside /mnt/flashdisk:
cd /mnt/flashdisk
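Before unplugging the drive, step out of the mount point and unmount it (you cannot unmount a directory you are still inside):
cd /
umount /mnt/flashdisk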
Hope this helps :)

Thursday, 14 February 2013

Installing NFS on ClearOS 6.3

This is a guide to installing the NFS service on a Linux CentOS 6.2 box and making it accessible to other machines. The scenario is the following:
  • Grant read-only access to the /home/public directory to all networks
  • Grant read/write access to the /home/common directory to all networks 
At the end of this guide you will get:
  • A running NFS server with various LAN shared directories
  • An active set of firewall rules allowing access to the NFS ports
  • A permanently mounted NFS share on a CentOS / Ubuntu client
I assume you already have:

  • a fresh running Linux CentOS 6.2 server 
  • a sudoer user, named bozz on this guide
  • an accessible RPM repository / mirror
  • a Linux client with CentOS / Ubuntu

Steps

  1. Login as bozz user on the server
  2. Check if rpcbind is installed:
    $ rpm -q rpcbind
    rpcbind-0.2.0-8.el6.x86_64
    
    if not, install it:
    $ sudo yum install rpcbind
    
  3. Install NFS-related packages:
    $ sudo yum install nfs-utils nfs-utils-lib
    
  4. Once installed, configure the nfs, nfslock, and rpcbind services to run as daemons:
    $ sudo chkconfig --level 35 nfs on
    $ sudo chkconfig --level 35 nfslock on 
    $ sudo chkconfig --level 35 rpcbind on
    
    then start the rpcbind and nfs daemons:
    $ sudo service rpcbind start
    $ sudo service nfslock start 
    $ sudo service nfs start 
    
    NFS daemons
    • rpcbind: (portmap in older versions of Linux) the primary daemon upon which all the others rely, rpcbind manages connections for applications that use the RPC specification. By default, rpcbind listens to TCP port 111 on which an initial connection is made. This is then used to negotiate a range of TCP ports, usually above port 1024, to be used for subsequent data transfers. You need to run rpcbind on both the NFS server and client. 
    • nfs: starts the RPC processes needed to serve shared NFS file systems. The nfs daemon needs to be run on the NFS server only. 
    • nfslock: Used to allow NFS clients to lock files on the server via RPC processes. The nfslock daemon needs to be run on both the NFS server and client.

  5. Test whether NFS is running correctly with the rpcinfo command. You should get a listing of running RPC programs that must include mountd, portmapper, nfs, and nlockmgr:
    $ rpcinfo -p localhost
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  40481  status
        100024    1   tcp  49796  status
        100011    1   udp    875  rquotad
        100011    2   udp    875  rquotad
        100011    1   tcp    875  rquotad
        100011    2   tcp    875  rquotad
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100227    2   tcp   2049  nfs_acl
        100227    3   tcp   2049  nfs_acl
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100227    2   udp   2049  nfs_acl
        100227    3   udp   2049  nfs_acl
        100021    1   udp  32769  nlockmgr
        100021    3   udp  32769  nlockmgr
        100021    4   udp  32769  nlockmgr
        100021    1   tcp  32803  nlockmgr
        100021    3   tcp  32803  nlockmgr
        100021    4   tcp  32803  nlockmgr
        100005    1   udp    892  mountd
        100005    1   tcp    892  mountd
        100005    2   udp    892  mountd
        100005    2   tcp    892  mountd
        100005    3   udp    892  mountd
        100005    3   tcp    892  mountd
    

  6. The /etc/exports file is the main NFS configuration file, and it consists of two columns. The first column lists the directories you want to make available to the network. The second column has two parts: the first lists the networks or DNS domains that can access the directory, and the second lists NFS options in brackets. Edit /etc/exports and append the desired shares:
    $ sudo nano /etc/exports
    
    then append:
    /home/public *(ro,sync,all_squash)
    /home/common *(rw,sync,all_squash)
    
    • /home/public: directory to share  with read-only access to all networks
    • /home/common: directory to share with read/write access to all networks
    • *: allow access from all networks
    • ro: read-only access
    • rw: read/write access 
    • sync: synchronous access 
    • root_squash: prevents remote root users from having root privileges and maps them to the user nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing unauthorized alteration of files on the server. The no_root_squash option turns root squashing off. To squash every remote user, including root, use the all_squash option (as done above). To specify the user and group IDs to use for remote users from a particular host, use the anonuid and anongid options, respectively, as (anonuid=<uid>,anongid=<gid>), where <uid> is the user ID number and <gid> is the group ID number.
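
    After editing /etc/exports you can apply the changes without restarting the NFS daemon, and then verify the result (exportfs is part of nfs-utils):
    $ sudo exportfs -ra
    $ sudo exportfs -v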

  7. Create the directories to be published with the correct permissions:
    $ sudo mkdir -p /home/public
    $ sudo chown nfsnobody:nfsnobody /home/public
    $ sudo mkdir -p /home/common
    $ sudo chown nfsnobody:nfsnobody /home/common
    
    it should end like this:
    $ ls -l /home/
    ...
    drwxr-xr-x. 2 nfsnobody nfsnobody  4096 Feb 20 12:55 common
    drwxr-xr-x. 7 nfsnobody nfsnobody  4096 Feb 17 14:44 public
    
  8. [OPTIONAL] Allow the bozz user to write locally to the created directories by appending it to the nfsnobody group and granting write permission to the group:
    $ sudo usermod -a -G nfsnobody bozz
    $ sudo chmod g+w /home/public
    $ sudo chmod g+w /home/common
    
    it should end like this:
    $ ls -l /home/
    ...
    drwxrwxr-x. 2 nfsnobody nfsnobody  4096 Feb 20 12:40 common
    drwxrwxr-x. 7 nfsnobody nfsnobody  4096 Feb 17 14:44 public
    
  9. Security issues. To allow remote access, some firewall rules and other NFS settings must be changed. You need to open the following ports:
    • TCP/UDP 111 - RPC 4.0 portmapper
    • TCP/UDP 2049 - NFSD (nfs server)
    • Various static portmap TCP/UDP ports, defined in the /etc/sysconfig/nfs file.
    The portmapper assigns each NFS service a port dynamically at service startup, but dynamic ports cannot be protected by iptables. First, you need to configure the NFS services to use fixed ports. Edit /etc/sysconfig/nfs:
    $ sudo nano /etc/sysconfig/nfs
    
    and set:
    LOCKD_TCPPORT=32803
    LOCKD_UDPPORT=32769
    MOUNTD_PORT=892
    RQUOTAD_PORT=875
    STATD_PORT=662
    STATD_OUTGOING_PORT=2020
    
    then restart nfs daemons:
    $ sudo service rpcbind restart
    $ sudo service nfs restart
    
    update iptables rules by editing /etc/sysconfig/iptables, enter:
    $ sudo nano /etc/sysconfig/iptables
    and append the following rules:
    -A INPUT -s 0.0.0.0/0 -m state --state NEW -p udp --dport 111 -j ACCEPT
    -A INPUT -s 0.0.0.0/0 -m state --state NEW -p tcp --dport 111 -j ACCEPT
    -A INPUT -s 0.0.0.0/0 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p tcp --dport 32803 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p udp --dport 32769 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p tcp --dport 892 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p udp --dport 892 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p tcp --dport 875 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p udp --dport 875 -j ACCEPT
    -A INPUT -s 0.0.0.0/0  -m state --state NEW -p tcp --dport 662 -j ACCEPT
    -A INPUT -s 0.0.0.0/0 -m state --state NEW -p udp --dport 662 -j ACCEPT
    
    restart iptables daemon:
    $ sudo service iptables restart
    
  10. Mount the NFS shared directories. Install the client NFS packages first:
    on an Ubuntu client:
    $ sudo apt-get install nfs-common
    
    on CentOS client:
    $ sudo yum install nfs-utils nfs-utils-lib
    
    inquiry for the list of all shared directories:
    $ showmount -e SERVERADDRESS
    
    mount server's /home/public on client's /public:
    $ sudo mkdir -p /public
    $ sudo mount SERVERADDRESS:/home/public /public
    $ df -h
    
    mount server's /home/common on client's /common:
    $ sudo mkdir -p /common
    $ sudo mount SERVERADDRESS:/home/common /common
    $ df -h
    
  11. Mount NFS automatically after reboot on the client. Edit /etc/fstab:
    $ sudo nano /etc/fstab
    
    append the following line:
    #Directory                   Mount Point    Type   Options       Dump   FSCK
    SERVER_IP_ADDRESS:/home/public /public nfs hard 0 0
    SERVER_IP_ADDRESS:/home/common /common nfs hard 0 0
    
    to test the correctness of /etc/fstab before rebooting, you can try to manually mount /public and /common:
    $ sudo mount /public
    $ sudo mount /common
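
    alternatively, sudo mount -a attempts to mount every entry in /etc/fstab, which exercises the new lines the same way a reboot would:
    $ sudo mount -a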


Wednesday, 13 February 2013

Mounting an ISO File in Linux


We can mount an ISO file in Linux into a folder with just a single command.

The commands used to mount an ISO file are:
mkdir /mnt/nama-folder   # create the folder to mount into

sudo mount -o loop nama-file.iso /mnt/nama-folder  
In this example I mount the file "codeart-1.0-i386.iso" into the folder "iso-mount", so the command is:
sudo mount -o loop codeart-1.0-i386.iso iso-mount
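To confirm the image mounted correctly, you can list the active iso9660 mounts:
mount | grep iso9660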
Easy enough, right? To unmount the ISO, just type:
sudo umount nama-folder
Since this example uses the folder "iso-mount", the command is:
sudo umount iso-mount
I think the commands above are easy enough, but if you prefer a GUI (Graphical User Interface) application, point-and-click and done, try Mounty.

Hope this helps.

FOG v0.32 on ClearOS v5.2

Folks,

Not sure if anyone has posted this before but here's a quick HOWTO on getting FOG running on a ClearOS 5.2 install.
Not claiming this is 100% complete, as I worked on it over a few days and may have forgotten something. I also suspect some of the steps are not actually required, but I had a running FOG server at the end of it, so hopefully this will be helpful to other folks trying the same. FOG v0.32 has a higher php requirement than the stock php on ClearOS, so we have to upgrade that too to get it all working.


Quick cheat sheet on getting FOG v0.32 running on a ClearOS 5.2 Install

Do a base OS install of ClearOS and put as much storage as you can at /images

Once you get the base install done, register the box, then install the following additional software if it is not already installed, and enable it.

As root run each of these commands on their own and say yes to the obvious questions.
Code:


yum update
yum install xinetd
yum install nfs
chkconfig portmap on
chkconfig nfs on
chkconfig xinetd on
service xinetd start
service portmap start
service nfs start



Note that FOG is going to install vsftpd, so make sure the internal FTP server is turned off or, better still, not installed.
At some point I was able to get the built-in FTP working with FOG, but that was on an older version. My suggestion is to run a dedicated ClearOS box for FOG and let it use the vsftpd it wants. If you are trying to run FOG on a ClearOS box that already uses FTP for something else, I suggest disabling vsftpd and pointing the "fog" user's FTP login at /images, as this seems to be the key to making everything jive.

Add a local system account for fog:
Code:


useradd -r fog



Set the password for this user to "password"

Code:


passwd fog 



Download the 0.32 version of FOG from:
Code:


http://sourceforge.net/projects/freeghost/files/



Before you run the installer you need to make a quick change or two.
Code:


nano fog_0.32/lib/redhat/config.sh



Remove php-gettext from the packages= line.
Change clamav-update to simply: clamav

Then run fog installer.
Code:


fog_0.32/bin/installfog.sh



All being well, you can answer the questions based upon your specific setup and it will complete without issue.

Fix TFTP write errors (read was OK)
Code:


chown -R fog.nobody /tftpboot
chmod -R 775 /tftpboot



If you use dnsmasq (ClearOS) for DHCP then do the following:
Code:


nano /etc/dnsmasq.conf: 
    dhcp-boot=pxelinux.0,,<your.server.ip.address>
nano /etc/dnsmasq/dhcp.conf
     dhcp-option=eth1,66,"<your.server.ip.address>"



If you use some other DHCP server on your network:
On a Linux DHCP server you must set:
next-server
On a Windows/Novell DHCP server you must set:
options 066 & 067
to the IP of the FOG server.
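For reference, on an ISC dhcpd server those two settings look roughly like this (a sketch; the address below is a placeholder for your FOG server's IP):
Code:


next-server 192.168.1.10;
filename "pxelinux.0";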

To fix issues with fog 0.32 and php (being unable to submit a task for FOG to process), you need to upgrade into the 5.3.x tree.
I am using TimB's excellent repo for the php upgrade (thanks as always Tim) and blatantly stealing the info from other posts here to upgrade php on this install.
Code:


php -v
rpm --import ftp://timburgess.net/RPM-GPG-KEY-TimB.txt
wget ftp://timburgess.net/repo/clearos/5.2/os/timb-release-1-0.noarch.rpm
rpm -Kv timb-release-1-0.noarch.rpm
rpm -Uvh timb-release-1-0.noarch.rpm
yum --enablerepo=timb-testing upgrade php
service httpd restart
php -v



You are going to want to change the default password from "password", I am sure.
Log in to the fog management interface:
Code:


http://<your server ip>/fog

Click on the Users icon (the two-heads icon, second on the left).
Edit the "fog" user and change the password*.
* Note that you are limited in which characters you can use, so keep it fairly simple but secure. Avoid special characters.

Then click on the "Storage Management" icon > All Storage Nodes > DefaultMember,
then enter the same password used above in Management Password:

Click on the Other Information icon (blue question mark)
Click FOG Settings
Scroll down to: TFTP Server 
Set the same password in FOG_TFTP_FTP_PASSWORD
It's a good idea to review the settings in here for any glaringly obvious issues.

Lastly, change the fog system user password (this is new for fog 0.32, I believe).
Log in to the ClearOS box via SSH or at the terminal as root and run:
Code:


passwd fog
<your new password>



This should now leave you with a working FOG install on ClearOS.
Go ahead and set a node to boot from network, do a quick register, create a blank image and associate with the host you just registered then create a quick task to upload an image on that node. When the node you registered reboots it should boot into Fog and start an image upload. 

I probably missed something, so if you get stuck post a message here and I'll see if I can help out. FOG can be a pain to get going, but it's worth its weight in gold on larger networks, and as always, if you can run it on ClearOS it's going to be stable and fast 

Hope that helps some folks out.

Jim

Friday, 8 February 2013

PXE on ClearOS

ClearOS can act as a PXE and TFTP server, providing boot media for PCs that have no CD-ROM or DVD-ROM drive.

TFTP and PXE can help our older PCs run as thin clients, whether Linux or Windows, via Remote Desktop (RDP).

First we must change the DHCP server settings on our ClearOS box and add our server's IP as the TFTP server.



[Screenshot: DHCP and TFTP configuration page in ClearOS 5]

In addition, you can install:

 yum install syslinux
 yum install tftp-server (optional)

Set the TFTP server's IP address from our OS; in this case 10.0.0.138. The second thing is to connect via SSH (the account and password are the same as for the web administration).

In the resulting terminal, do the following. (A vi note for beginners: press i to insert text, then ESC followed by :x to save and exit.)


echo "#Oprava pro PXE
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/opt/tftpboot" >> /etc/dnsmasq.conf
service dnsmasq restart


mkdir /opt/tftpboot


ln -s /opt/tftpboot /tftpboot
mkdir -p /opt/tftpboot/images/centos/6/x86_64/
cd /tftpboot/images/centos/6/x86_64/
wget http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/x86_64/images/pxeboot/initrd.img http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/x86_64/images/pxeboot/vmlinuz
mkdir -p /opt/tftpboot/images/centos/6/i386/
cd /tftpboot/images/centos/6/i386/
wget http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/i386/images/pxeboot/initrd.img http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/i386/images/pxeboot/vmlinuz
cp /usr/share/syslinux/pxelinux.0 /tftpboot
cp /usr/share/syslinux/menu.c32 /tftpboot
cp /usr/share/syslinux/memdisk /tftpboot
cp /usr/share/syslinux/mboot.c32 /tftpboot
cp /usr/share/syslinux/chain.c32 /tftpboot
mkdir /tftpboot/pxelinux.cfg
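
At this point you can optionally verify the TFTP side from any machine with a tftp client (a quick check, assuming the tftp-hpa client; 10.0.0.138 is the server address used above):

tftp 10.0.0.138 -c get pxelinux.0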


We install under /opt/ because it is not a standard part of the ClearOS system, but the symbolic link is very useful.
Now we will create the start menu, and then show you how to add entries of your own.


vi /tftpboot/pxelinux.cfg/default

default menu.c32
prompt 0
timeout 300
ONTIMEOUT local
MENU TITLE Hlavni nabidka

LABEL local
MENU LABEL Start z lokalniho disku
LOCALBOOT 0

LABEL CentOS
MENU LABEL CentOS moznosti
KERNEL menu.c32
APPEND pxelinux.cfg/centos




vi /tftpboot/pxelinux.cfg/centos

MENU TITLE CentOS moznosti

LABEL Hlavni nabidka
MENU LABEL Navrat do hlavni nabidky
KERNEL menu.c32
APPEND pxelinux.cfg/default

LABEL CentOS 6 64bit FTP Silicon Hill
MENU LABEL CentOS 6 64bit FTP Silicon Hill
KERNEL images/centos/6/x86_64/vmlinuz
APPEND initrd=images/centos/6/x86_64/initrd.img ip=dhcp lang=cs_CZ keymap=cz-lat2 repo=http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/x86_64

LABEL CentOS 6 64bit FTP Silicon Hill Zachrany Rezim
MENU LABEL CentOS 6 64bit FTP Silicon Hill Zachrany Rezim
KERNEL images/centos/6/x86_64/vmlinuz
APPEND rescue initrd=images/centos/6/x86_64/initrd.img ip=dhcp lang=cs_CZ keymap=cz-lat2 repo=http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/x86_64

LABEL CentOS 6 32bit FTP Silicon Hill
MENU LABEL CentOS 6 32bit FTP Silicon Hill
KERNEL images/centos/6/i386/vmlinuz
APPEND initrd=images/centos/6/i386/initrd.img ip=dhcp lang=cs_CZ keymap=cz-lat2 repo=http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/i386

LABEL CentOS 6 32bit FTP Silicon Hill Zachrany Rezim
MENU LABEL CentOS 6 32bit FTP Silicon Hill Zachrany Rezim
KERNEL images/centos/6/i386/vmlinuz
APPEND rescue initrd=images/centos/6/i386/initrd.img ip=dhcp lang=cs_CZ keymap=cz-lat2 repo=http://ftp.sh.cvut.cz/MIRRORS/centos/6/os/i386

 




Saturday, 2 February 2013

Extend LVM Disk Space With New Hard Disk


This is a step-by-step guide to extending logical volume group disk space as configured under LVM version 1.x on Redhat Enterprise Linux AS 3. The same procedure has also been used to extend LVM disk space with a new SCSI hard disk configured with LVM version 2.x on Debian Sarge 3.1.

So it should serve well as a reference for Linux users who plan to extend LVM disk space in Linux distributions other than Redhat and Debian.

Although it's not strictly necessary, you're advised to perform a full file system backup before carrying out this exercise!

The riskiest step is resizing the file system that resides in an LVM logical volume. Make sure the right file system resizer tool is used: if you use resize2fs to resize a Reiserfs file system, I guess you'll know how bad the consequences will be.

Apparently, you’ll need resize_reiserfs to resize a Reiserfs file system, which is part of the reiserfsprogs package.

Steps to extend the /home file system, which mounts on logical volume /dev/vg0/lvol1 of volume group vg0, using a new 36GB SCSI hard disk added to RAID 0 on an HP Smart Array 5i Controller.

1) Log in as root user and type init 0 to shutdown Redhat Enterprise AS 3 Linux.

2) Add in the new 36GB SCSI hard disk. Since the HP Smart Array 5i is configured for RAID 0, it's fine to mix hard disks of different capacities, but the disk speeds must be the same! A mix of 10K and 15K RPM hard disks might cause Redhat Enterprise Linux to fail to boot up properly.

Normally, the HP Smart Array 5i Controller will automatically configure a new hard disk as a logical drive for RAID 0. If not, press F8 on boot-up to get into the HP Smart Array 5i Controller setup screen and manually create a logical drive as part of RAID 0.
How can you tell whether the new hard disk is not configured as a logical drive for RAID 0?

Physically, the hard disk green light should be on or blinking to indicate that it’s online to RAID system.

From OS level, 3rd hard disk in RAID 0 of HP Smart Array 5i Controller is denoted as /dev/cciss/c0d2. So, type

fdisk /dev/cciss/c0d2

at the root command prompt. If an error message like "Unable to open /dev/cciss/c0d2" is returned, the new hard disk is not online to the RAID system or to Redhat Linux.

3) Boot Redhat Enterprise Linux into multi-user mode and confirm it's working properly. This step is not necessary, but it's good practice to prove the server is working fine after each change, be it major or minor.

4) Type init 1 at the root command prompt to boot into single-user mode. Whenever possible, boot into single-user mode for system maintenance, so as to avoid inconsistency or corruption.

5) At the root command prompt, type

fdisk /dev/cciss/c0d2

to create a partition on the 3rd SCSI hard disk added to RAID 0. Each hard disk needs at least one partition (a maximum of 4 primary partitions per disk) in order to be used in a Linux system.

6) While at the fdisk command prompt, type m to view fdisk command options.

7) Type n to add a new partition, followed by p to go for primary partition type.

8) Type 1 to create the first partition. Press ENTER to accept the default first cylinder of 1, and press ENTER again to accept the default last cylinder, which essentially creates a single partition using all the hard disk space.

9) Type t to change the partition system id, or partition type. As there is only one partition, partition 1 is automatically selected for action. Type L to list all supported partition types. As shown in the listing, type 8e to set partition 1 to the Linux LVM partition type.

10) Type p to confirm that partition /dev/cciss/c0d2p1 has been created in the partition table. Type w to write the pending partition table changes to disk and exit the fdisk command line.

11) Type df -hTa to confirm the type of the /home file system, which mounts on logical volume /dev/vg0/lvol1. In this case, it's an ext3 file system.

12) Type umount /home to unmount the /home file system from Redhat Enterprise Linux.

13) Next, type LVM command

pvcreate /dev/cciss/c0d2p1

to create a new LVM physical volume on the new partition /dev/cciss/c0d2p1.

14) Now, type another LVM command

vgextend vg0 /dev/cciss/c0d2p1

to extend LVM volume group vg0, with that new LVM physical volume created on partition /dev/cciss/c0d2p1.

15) Type pvscan to display the physical volumes created in the Linux LVM system. This is useful for answering questions such as "How many physical volumes are in volume group vg0?", "How much free disk space is left on each physical volume?", "Which physical volume should be used for a logical volume?", "Which physical volume has free disk space for use with a logical volume?", etc.

Sample output of pvscan command:

ACTIVE PV "/dev/cciss/c0d0p4" of VG "vg0" [274.27GB / 0 free]
ACTIVE PV "/dev/cciss/c0d1p1" of VG "vg0" [33.89GB / 0 free]
ACTIVE PV "/dev/cciss/c0d2p1" of VG "vg0" [33.89GB / 33.89GB free]
total: 3 [342.05GB] / in use: 3 [342.05GB] / in no VG: 0 [0]

Alternatively, type vgdisplay vg0 | grep PE to confirm that the new physical volume has been added to volume group vg0. Take note of Free PE / Size (roughly 34GB in this case): that's the free disk space added to volume group vg0 by the new physical volume.

16) Execute LVM command

lvextend -L +33G /dev/vg0/lvol1 /dev/cciss/c0d2p1

to extend the size of logical volume /dev/vg0/lvol1 of volume group vg0 by 33GB on physical volume /dev/cciss/c0d2p1.

17) Now the riskiest steps start. Type this command

e2fsck -f /dev/vg0/lvol1

to force an ext3 file system check on /dev/vg0/lvol1. It's a must to confirm the file system is in a good state before implementing any changes on it.

CAUTION – the e2fsck utility is only used to check EXT file systems such as ext2 and ext3, not other file systems such as Reiserfs!

Once the ext file system check completes without errors or warnings, type command

resize2fs /dev/vg0/lvol1

to resize the EXT3 file system of /home, mounted on logical volume /dev/vg0/lvol1, until it takes up all the free disk space added to /dev/vg0/lvol1.

CAUTION – the resize2fs utility is only used to resize EXT file systems such as ext2 and ext3, not other file systems such as Reiserfs!
Both e2fsck and resize2fs are part of the e2fsprogs package, and both utilities take some minutes to complete, depending on the size of the target file system.

If everything is all right, type mount /home to re-mount the /home file system. Next, type df -h to confirm that the /home file system has been extended successfully.

source: http://www.walkernews.net/2007/02/27/extend-lvm-disk-space-with-new-hard-disk/

Linux LVM In 3 Minutes

What's LVM? Why use the Linux Logical Volume Manager, or LVM? Well, those questions are not the scope here. In brief, the most attractive feature of the Logical Volume Manager is that it makes disk management easier in Linux!

Basically, LVM allows users to dynamically extend or shrink a Linux "partition" or file system online! LVM can resize volume groups (VG) online by adding new physical volumes (PV) or removing existing PVs attached to a VG.


In this 3-minute Linux LVM guide, let's assume that:

  • LVM is not currently configured or in use. That said, this is the LVM tutorial for you if you're going to set up LVM from the ground up on a production Linux server with a new SATA / SCSI hard disk.
     
  • Lacking luxury server hardware, I tested this LVM tutorial on a PC with the secondary hard disk dedicated to the LVM setup. So, the Linux dev file of the secondary IDE hard disk will be /dev/hdb (or /dev/sdb for a SCSI hard disk).
     
  • This guide was fully tested on Red Hat Enterprise Linux 4 with the Logical Volume Manager 2 (LVM2) run-time environment (LVM version 2.00.31 2004-12-12, Library version 1.00.19-ioctl 2004-07-03, Driver version 4.1.0)!

How to setup Linux LVM in 3 minutes at command line?
  1. Login with root user ID and try to avoid using sudo command for simplicity reason.
     
  2. Using the whole secondary hard disk for LVM partition:
    fdisk /dev/hdb

    At the Linux fdisk command prompt,
    1. press n to create a new disk partition,
    2. press p to create a primary disk partition,
    3. press 1 to denote it as 1st disk partition,
    4. press ENTER twice to accept the default of 1st and last cylinder – to convert the whole secondary hard disk to a single disk partition,
    5. press t (this will automatically select the only partition, partition 1) to change the default Linux partition type (0x83) to the LVM partition type (0x8e),
    6. press L to list all the currently supported partition type,
    7. press 8e (as per the L listing) to change partition 1 to 8e, i.e. Linux LVM partition type,
    8. press p to display the secondary hard disk partition setup. Please take note that the first partition is denoted as /dev/hdb1 in Linux,
    9. press w to write the partition table and exit fdisk upon completion.

     
  3. Next, this LVM command will create a LVM physical volume (PV) on a regular hard disk or partition:
    pvcreate /dev/hdb1
     
  4. Now, another LVM command creates an LVM volume group (VG) called vg0 with a physical extent size (PE size) of 16MB:
    vgcreate -s 16M vg0 /dev/hdb1

    Plan the PE size properly before creating a volume group with the vgcreate -s option!
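
    Why plan ahead? Under LVM1 a volume was limited to roughly 65,000 extents, so the PE size caps how large it can grow (LVM2 no longer has this practical limit). A quick worked example:
    16MB per extent x 65,000 extents = a ceiling of roughly 1TB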
     
  5. Create a 400MB logical volume (LV) called lvol0 on volume group vg0:
    lvcreate -L 400M -n lvol0 vg0

    This lvcreate command creates a softlink /dev/vg0/lvol0 pointing to the corresponding block device file /dev/mapper/vg0-lvol0.
     
  6. The Linux LVM setup is almost done. Now it's time to format logical volume lvol0 with a Red Hat Linux supported file system, i.e. the EXT3 file system, with a 1% reserved block count:
    mkfs -t ext3 -m 1 -v /dev/vg0/lvol0
     
  7. Create a mount point before mounting the new EXT3 file system:
    mkdir /mnt/vfs
     
  8. The last step of this LVM tutorial – mount the new EXT3 file system created on logical volume lvol0 of LVM to /mnt/vfs mount point:
    mount -t ext3 /dev/vg0/lvol0 /mnt/vfs

To confirm that the LVM setup completed successfully, the df -h command should display a message similar to this:

/dev/mapper/vg0-lvol0 388M 11M 374M 3% /mnt/vfs

Some useful LVM commands for reference:
vgdisplay vg0
 
Check or display volume group settings, such as the physical extent size (PE Size), volume group name (VG Name), maximum logical volumes (Max LV), maximum physical volumes (Max PV), etc.
pvscan
 
Check or list all physical volumes (PV) created for volume groups (VG) on the current system.
vgextend
 
Dynamically add more physical volumes (PV), i.e. new hard disks or disk partitions, to an existing volume group (VG) in online mode. You'll have to manually execute vgextend after the pvcreate command that creates the LVM physical volume (PV).
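For example (hypothetical device names), after partitioning a newly added disk as /dev/hdc1, vg0 grows like this:
pvcreate /dev/hdc1
vgextend vg0 /dev/hdc1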
source: https://www.walkernews.net/2007/07/02/how-to-create-linux-lvm-in-3-minutes/

Friday, 1 February 2013

LVM (Logical Volume Manager)

Getting to Know the LVM (Logical Volume Manager) Partition Type

If you've been around Linux for a while, you're surely no stranger to LVM (Logical Volume Management). What is Logical Volume Management (LVM), and why do Linux operating systems use this type of file system? Let's get acquainted with LVM.

What is LVM?

Logical Volume Management (LVM) is a disk management option that almost every Linux distro includes. Whether you need to create a large pool of storage or dynamic partitions, LVM may well be the solution for you.

The Logical Volume Manager makes it possible to create a layer between the operating system and the disks/partitions it uses. In traditional disk management, your operating system looks for the available disks (/dev/sda, /dev/sdb, etc.) and then at the partitions available on those disks (/dev/sda1, /dev/sda2, etc.).

With LVM, disks and partitions can be combined into a single Logical Volume built from several disks and/or partitions. The OS is unaware and completely unaffected, because LVM only exposes the volume groups (disks) and logical volumes (partitions) that we have created.

Because volume groups and logical volumes are not physically tied to a hard drive, it is easy for us to resize partitions/disks dynamically and to create new disks and partitions. On top of that, LVM gives you features that traditional file systems cannot. For example, ext3 has no support for live snapshots, but if you use LVM you can take a snapshot of your logical volume without unmounting the disk.
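
For example, a live snapshot takes a single command (a sketch with hypothetical volume names; the snapshot only needs enough space to hold the blocks that change while it exists):

lvcreate -L 1G -s -n data_snap /dev/vg0/data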

When should you use LVM?

If you run Linux on a laptop with only one disk and have no plan (or no way) to add capacity, then you don't need LVM. But if you might add disk capacity in the future and don't fancy reinstalling the OS, or want to combine several disks into one partition, it's time to use LVM. Some distros, such as Fedora, choose LVM as the installation default; others offer LVM as an option without making it the default.

Usage

As mentioned above, LVM enables you to:

Manage a large number of disks, letting you add, replace, copy, and share contents from one disk to another without disrupting the services that are running.
For home use, rather than agonizing over reinstalling the OS to replace a disk whose capacity no longer fits your current activity and the OS's future needs, LVM makes it easy to resize partitions as required.
Make backups using the "snapshot" facility.
Build a single logical volume out of several physical volumes/partitions or a whole disk (similar to RAID 0, but closer to JBOD), with dynamic resizing.

Features

LVM can do the following:
Resize volume groups online, adding or removing physical volumes.
Resize logical volumes online, growing or shrinking their capacity.
Stripe all or part of a logical volume across physical volumes, similar to RAID 0.
Mirror all or part of a logical volume, similar to RAID 1.
Move logical volumes between physical volumes.
Split or merge volume groups (as long as no logical volume spans them). This is very useful when moving whole logical volumes to and from offline storage.

LVM can also work on shared storage (cluster setups, using DRBD to link the nodes). Beyond all the features and uses above, LVM has one limitation: it cannot provide redundancy the way RAID levels 3 through 6 do.

Hope this helps

source: http://vavai.com/2011/09/22/berkenalan-dengan-lvm/

Compiled from various sources.

Further reading:
1. http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29
2. http://en.wikipedia.org/wiki/Logical_volume_management
3. http://www.ibm.com/developerworks/linux/library/l-lvm/

2 HDDs, 1 LVM Partition on ClearOS 6.3


Combining 2 HDDs into 1 partition using LVM on ClearOS 6.3

If you want to extend your old hard disk's capacity with a new hard disk while keeping a single partition (reusing the old one), here is the right way to do it:

Example case: the new hard disk is recognized as /dev/sdb

View all your Linux partitions with:

# fdisk -l

Wipe all partitions on the new disk with:

# cfdisk /dev/sdb   (and don't forget: write -> exit)

View the physical volumes (the disks recognized by your Linux, e.g. /dev/sda, /dev/sdb, etc.):

# pvdisplay

First create a physical volume for the new hard disk (e.g. /dev/sdb):

# pvcreate /dev/sdb

Check the volume group (VG Name: vg_....) that will receive the combined space of the two hard disks:

# vgdisplay

Then extend the old disk's volume group with the new hard disk:

# vgextend vg_.... /dev/sdb

Check the combined capacity (LV Name: lv_....):

# lvdisplay

Once it has changed, continue by allocating all free space to the single partition "lv_....":

# lvextend -l +100%FREE vg_..../lv_....

After that, resize so that the old partition picks up the new size:

# resize2fs /dev/vg_..../lv_....

Check the usage of the partitions/disks you've made:

# df -h

Wednesday, 23 January 2013

Nginx on ClearOS 6.3

:Install
 rpm -Uvh nginx-1.2.6-1.el6.ngx.i386.rpm

:Create directories to hold cache files
mkdir /usr/local/www
mkdir /usr/local/www/nginx_cache
mkdir /usr/local/www/nginx_cache/tmp
mkdir /usr/local/www/nginx_cache/files
chown apache /usr/local/www/nginx_cache/files/ -Rf


:create cache dir
mkdir /cache1
chown squid:squid /cache1
chmod -R 777 /cache1

:Start dir
squid -z

:Start squid
service squid start

:Restart Nginx
service nginx restart

:Check cached videos
ls -lh /usr/local/www/nginx_cache/files

:check cache hit
tail -f /var/log/squid/access.log | grep HIT

: Check Hit
squidclient -p 3128 mgr:info |grep Hit

:squid debug
squid -NCd10

Lusca on ClearOS 6.3

rpm -i squid-LUSCA_HEAD_r14941-1_el5.i686.rpm
or


rpm -i squid-LUSCA_HEAD_r14941.clearos.i686.rpm

Link Download File
https://www.dropbox.com/sh/yrv8l1g0ei3ij6a/GJXCbAG8EQ
https://code.google.com/p/ghebhes-low-battery/downloads/list

All of the Lusca/Squid configuration files can be found at
/etc/squid/

and the squid executable can be found at
/usr/local/squid/sbin/

mkdir /cache1
chown squid:squid /cache1
chmod -R  777 /cache1

Now initialize the cache dir with:

squid -z
chmod +x /etc/squid/storeurl.pl
squid -k parse
squid -k reconfigure

TIP:
To start the SQUID server in debug mode, to check for any errors, use

squid -d1
squid -NCd10

service squid start/stop/restart

:check cache hit
tail -f /var/log/squid/access.log | grep HIT

:Check Hit
squidclient -p 3128 mgr:info |grep Hit


:Install CCZE for colorized log output

rpm -Uvh http://apa-kata-dunia.googlecode.com/files/ccze-0.2.1-6.el5.i386.rpm

# tail -f /var/log/squid/access.log | ccze




Disable GUI on ClearOS 6

yum remove gconsole app-graphical-console