PostgreSQL tuning for MySQL admins

Posted by Unknown, Tuesday, 30 October 2012, 0 comments
http://www.openlogic.com/wazi/bid/234927/PostgreSQL-tuning-for-MySQL-admins


You can get optimum performance from your database by tuning your system in three areas: the hardware, the database, and the database server. Each area is more specialized than the last, with tuning of the actual database server being unique to the software in use. If you're already familiar with tuning MySQL databases, you'll find tuning a PostgreSQL database server similar, but with some key differences to watch out for.
Before tuning your PostgreSQL database server, work on optimizing some of the key factors in the hardware and the database. All databases, PostgreSQL and MySQL included, are ultimately limited by the I/O, memory, and processing capabilities of the hardware: the more a server has of each, the greater the performance it is capable of. Using fast disks with hardware RAID is essential for a busy enterprise database server, as is having a large amount of memory. For the best results, the server needs enough memory to cache the most commonly used tables without having to go to disk; under no circumstances should the server start swapping to hard disk. Similarly, the faster the CPU the better, and for servers handling multiple simultaneous transactions, multicore CPUs are best.
On the software side, you can optimize both the database structure and frequently used queries. Be sure to create appropriate indexes. As with MySQL, primary key indexes are essential, and unique indexes offer advantages for both data integrity and performance. All full-text searches should also have the correct indexes. Unlike MySQL, PostgreSQL can build indexes while the database continues to fulfill read and write requests: look at the CONCURRENTLY option of the CREATE INDEX command, which allows an index to be built without taking any locks that would prevent concurrent inserts, updates, or deletes on the table.
Even though an index has been created, PostgreSQL may not necessarily use it! PostgreSQL has a component called the planner that analyzes any given query and decides which is the best way to perform the requested operations. It decides between doing an index-based search or a sequential scan. In general, the planner does a good job of deciding which is the most effective way to resolve a query.
Let's see how this works in practice. Here is a simple table and some data:
CREATE TABLE birthdays (
    id        serial PRIMARY KEY,
    firstname varchar(80),
    surname   varchar(80),
    dob       date
);

INSERT INTO birthdays (firstname, surname, dob) VALUES ('Fred', 'Smith', '1989-05-02');
INSERT INTO birthdays (firstname, surname, dob) VALUES ('John', 'Jones', '1979-03-04');
INSERT INTO birthdays (firstname, surname, dob) VALUES ('Harry', 'Hill', '1981-02-11');
INSERT INTO birthdays (firstname, surname, dob) VALUES ('Bob', 'Browne', '1959-01-21');
Use the EXPLAIN command to see what the planner will decide when executing any given query:
EXPLAIN select * from birthdays;

QUERY PLAN
--------------------------------------------------------------
Seq Scan on birthdays (cost=0.00..12.00 rows=200 width=364)
This tells us that since all the data is being requested, PostgreSQL will use a sequential scan (Seq Scan). If the query uses the primary key (id) then the planner tries a different approach:
 EXPLAIN select * from birthdays where id=2;

QUERY PLAN
--------------------------------------------------------------
Index Scan using birthdays_pkey on birthdays (cost=0.00..8.27 rows=1 width=364)
This time it favored an Index Scan. Still, just because an index exists doesn't mean the planner will use it. A search for a particular date of birth will (without an index on dob) do a sequential scan:
EXPLAIN select * from birthdays where dob='1989-05-02';

QUERY PLAN
--------------------------------------------------------------
Seq Scan on birthdays (cost=0.00..1.10 rows=1 width=364)
If you created an index with CREATE INDEX dob_idx ON birthdays(dob); and ran the EXPLAIN command again, the result would be the same: a sequential scan would still be used. The planner makes this decision based on various table statistics, including the size of the dataset, and those statistics are not necessarily up to date. Without the latest stats, the planner's decisions will be less than perfect. Therefore, when you create an index or insert large amounts of new data, you should run the ANALYZE command to collect the latest statistics and improve the planner's decision-making.
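The two commands fit together naturally when adding an index to a live table. Here is a sketch using the sample table from above (CONCURRENTLY is only needed when the table must keep serving writes during the build, and it cannot run inside a transaction block):

```sql
-- Build the index without blocking concurrent writes to the table
CREATE INDEX CONCURRENTLY dob_idx ON birthdays (dob);

-- Refresh the planner's statistics so it can judge the new index properly
ANALYZE birthdays;
```

On a table as tiny as this one, the planner may still correctly prefer a sequential scan even after the index exists.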
You can force the planner to use the index (if it exists) using the SET enable_seqscan = off; command:
SET enable_seqscan = off;

EXPLAIN select * from birthdays where dob='1989-05-02';
QUERY PLAN
---------------------------------------------------------------------------
Index Scan using dob_idx on birthdays (cost=0.00..8.27 rows=1 width=364)
Turning off sequential scans might not improve performance; for a large number of results, index scans can be more I/O-intensive. Test the performance difference before deciding to disable sequential scans permanently.
The EXPLAIN command can show how queries are performed and reveal bottlenecks in the database structure. It also has an ANALYZE option that actually executes the query and shows the real run times. Here is the same query, but this time with the ANALYZE option:
EXPLAIN ANALYZE select * from birthdays where dob='1989-05-02';

QUERY PLAN
--------------------------------------------------------------------------
Seq Scan on birthdays (cost=0.00..1.09 rows=1 width=19) (actual time=0.007..0.008 rows=1 loops=1)
The results now contain extra information showing what the query actually returned. Unfortunately, the "actual time" and "cost" fields cannot be compared directly, as they are measured in different units, but if the row counts match, or are close, it means the planner correctly estimated the workload.
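EXPLAIN ANALYZE also gives you a fair way to test the earlier advice about enable_seqscan: toggle the setting within one session and compare the actual run times. A sketch, assuming an index on dob exists:

```sql
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM birthdays WHERE dob = '1989-05-02';  -- forces the index scan

RESET enable_seqscan;  -- restore the planner's default behaviour
EXPLAIN ANALYZE SELECT * FROM birthdays WHERE dob = '1989-05-02';  -- planner's own choice
```

Because SET only affects the current session, this comparison is safe to run on a live server.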
One other piece of routine maintenance that affects performance is clearing out the dead data left behind in the database after updates and deletes. When PostgreSQL deletes a row, the actual data may still reside in the database, marked as deleted and no longer used by the server. This makes deletion fast, but the dead data needs to be removed at some point. The VACUUM command removes this old data and frees up space. The PostgreSQL documentation explains how to set up autovacuum, which automates the execution of the VACUUM and ANALYZE commands.
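You can see how much dead data a table is carrying, and clean it up by hand, with a couple of commands (a sketch; pg_stat_user_tables is a standard PostgreSQL statistics view):

```sql
-- How many dead (deleted-but-not-reclaimed) rows does each table hold?
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Reclaim the space and refresh statistics in one pass
VACUUM ANALYZE birthdays;
```

If n_dead_tup stays persistently high on a busy table, that is usually a sign autovacuum needs tuning for that table.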

Tweaking the PostgreSQL server parameters

The /var/lib/pgsql/data/postgresql.conf file contains the configuration parameters for the PostgreSQL server and defines how various resources are allocated. Altering parameters in this file is similar to setting MySQL server system variables, either from command-line options or via the MySQL configuration files. Most of the parameters are best left alone, but modifying a few key items can improve performance. However, as with all resource-based configuration, setting items to unrealistic values will actually degrade performance; consider yourself warned.
  • shared_buffers sets the amount of memory the server dedicates to its shared data cache. The precise effect of this parameter on performance is hard to predict, but increasing it from the default of 32MB to between 6% and 15% of available RAM should enhance performance. For a 4GB system, a value of 512MB should be sufficient.
  • effective_cache_size tells the planner the size of the disk cache provided by the operating system. It should be at least a quarter of the total available memory; setting it to half of system memory is considered a normal, conservative value.
  • wal_buffers sets the number of disk-page buffers in shared memory for write-ahead logging. Setting this to around 16MB can improve the speed of WAL writes for large transactions.
  • work_mem is the amount of memory available to each sort operation. On systems that do a lot of sorting, increasing work_mem allows PostgreSQL to sort in memory rather than spilling to disk. The parameter is per sort, which means that if a client performs two sorts in a query, the specified amount of memory can be used twice. A value of, say, 10MB used by 50 clients doing two sorts each would occupy just under 1GB of system memory. Given how quickly the numbers add up, setting this parameter too high can consume memory unnecessarily, but in certain environments you can see performance gains by increasing it from the default of 1MB.
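On a hypothetical 4GB dedicated server, the postgresql.conf entries for the values discussed above would look something like this (a sketch only; tune and load-test the numbers for your own workload):

```
# postgresql.conf -- example values for a 4GB dedicated server
shared_buffers = 512MB        # ~12% of RAM for the shared data cache
effective_cache_size = 2GB    # half of system memory
wal_buffers = 16MB            # write-ahead log buffers
work_mem = 10MB               # per-sort working memory; multiplies per client
```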
To change a parameter, edit the conf file with a text editor, then restart the PostgreSQL server using the command service postgresql restart.
One last item to watch involves PostgreSQL's logging system, which is useful when you're trying to catch errors or during application development. However, if the logs are written to the same disk as the PostgreSQL database, the system may encounter an I/O bottleneck as the database tries to compete for bandwidth with its own logging actions. Tune the logging options accordingly and consider logging to a separate disk.
In summary, you can improve your database server's performance by running PostgreSQL on suitable hardware, keeping it routinely maintained, and creating appropriate indexes. Changing some of the database server configuration variables can also boost performance, but always test your database under simulated load conditions before enabling any such changes in a production environment.


NASA achieves data goals for Mars rover with open source software

Posted by Unknown, 0 comments
http://opensource.com/life/12/10/NASA-achieves-data-goals-Mars-rover-open-source-software


Since the landing of NASA's rover Curiosity on Mars on August 6th (Earth time), I have been following the incredible wealth of images that have been flowing back. I am awestruck by their breadth and beauty.
The technological challenge of Curiosity sending back enormous amounts of data has, in my opinion, not been fully appreciated. From NASA reports, we know that Curiosity was sending back low-resolution data (1,200 x 1,200 pixels) until it went through a software "brain transplant"; it now provides even more detailed and modifiable data.
How is this getting done so efficiently and distributed so effectively?
One recent story highlighted the 'anytime, anywhere' availability of Curiosity’s exploration that is handling "hundreds of gigabits/second of traffic for hundreds of thousands of concurrent viewers." Indeed, as the blog post from the cloud provider, Amazon Web Services (AWS), points out: "The final architecture, co-developed and reviewed across NASA/JPL and Amazon Web Services, provided NASA with assurance that the deployment model could cost-effectively scale, perform, and deliver an incredible experience of landing on another planet. With unrelenting goals to get the data out to the public, NASA/JPL prepared to service hundreds of gigabits/second of traffic for hundreds of thousands of concurrent viewers."
This is certainly evidence of the growing role that the cloud plays in real-time, reliable availability.
But, dig beneath the hood of this story—and the diagram included—and you’ll see another story. One that points to the key role of open source software in making this phenomenal mission work and the results available to so many, so quickly.
Here’s the diagram I am referring to:
Curiosity Diagram

If you look at the technology stack, you’ll see that at each level open source is key to achieving NASA’s mission goals. Let’s look at each one:

Nginx

Nginx (pronounced "engine-x") is a free, open source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. As a project, it has been around for about ten years. According to its website, Nginx now hosts about 12.18% (22.2 million) of active sites across all domains. Nginx is widely regarded as a preeminent web server for delivering content fast thanks to "its high performance, stability, rich feature set, simple configuration, and low resource consumption." Unlike traditional servers, Nginx doesn't rely on threads to handle requests; instead it uses a much more scalable, event-driven (asynchronous) architecture that consumes small and, more importantly, predictable amounts of memory under load.
Among the known high-visibility sites powered by Nginx, according to its website, are Netflix, Hulu, Pinterest, CloudFlare, Airbnb, WordPress.com, GitHub, SoundCloud, Zynga, Eventbrite, Zappos, Media Temple, Heroku, RightScale, Engine Yard and NetDNA.
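As an illustration of how little configuration that event-driven architecture needs, a minimal reverse-proxy setup looks roughly like this (a sketch; the backend address is hypothetical):

```nginx
worker_processes  auto;          # one worker process per CPU core

events {
    worker_connections  1024;    # each worker multiplexes many clients
}

http {
    upstream app_servers {
        server 10.0.0.10:8080;   # hypothetical backend application server
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```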

Railo

Railo is an open source CFML application server that implements the general-purpose CFML server-side scripting language. It has recently been accepted as part of JBoss.org and runs on the Java Virtual Machine (JVM). It is often used to create dynamic websites, web applications, and intranet systems. CFML is a dynamic language supporting multiple programming paradigms.

GlusterFS

Perhaps the most important piece of this high-demand configuration, GlusterFS is an open source, distributed file system capable of scaling to several petabytes (in theory, as much as 72 brontobytes!) and handling thousands of clients. GlusterFS clusters storage building blocks together over InfiniBand RDMA or TCP/IP interconnects, aggregating disk and memory resources and managing data in a single global namespace. GlusterFS is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. It is especially adept at replicating big data across multiple platforms, allowing users to analyze the data with their own analytical tools. The technology powers the personalized radio service Pandora and the cloud content services company Brightcove, and is used by NTTPC for its cloud storage business. (In 2011, Red Hat acquired Gluster, the open source software company that supports the GlusterFS upstream community; the product is now known as Red Hat Storage Server.)
I suspect that, as it did for me, the cascade of images from this unique exploration has sparked for yet another generation the mystery and immense challenge of life beyond our own planet. And what a change from the grainy television transmissions of the first moon landing, 43 years ago this summer (at least for those in a position to remember it!). Even 20 years ago, the delays, the inferior quality, and the narrow bandwidth of the data that could be analyzed stand in stark contrast to what is being delivered right now, and for the next few years, from this one mission.
Taken together, the combination of cloud and open source enabled the Curiosity mission to provide these results in real time, not months delayed; at high quality, not "good enough" quality. A traditional, proprietary approach would not have been this successful, given the short time to deployment and shifting requirements that necessitated the ultimate in agility and flexibility. NASA/JPL are to be commended. And while there was one cloud offering involved, "it really could have been rolled with any number of other solutions," as the story cited at the beginning of this post notes.
As policy makers and technology strategists continue their focus on 'big data', the mission of Curiosity will provide some important lessons. One key takeaway: open source has been key to the success of this mission and to making its results as widely available as possible, as quickly as possible.


Review: Nokia N95

Posted by Unknown, Thursday, 25 October 2012, 0 comments
The complete old-timer! ...
That's what I felt when I first used this Nokia N95. With a price that now ranges from about 500-600 thousand rupiah online, and around 300 thousand in physical markets, it is enough to get Symbian enthusiasts interested in this old-timer again, especially since its spare parts currently cost under 200 thousand rupiah.
Let me cover briefly what makes this old phone great.
The most prominent feature of this handset is its 5 MP camera, which produces remarkably sharp images for a phone of this age. On top of that there are Wi-Fi and Symbian OS 9.2, S60 rel. 3.1, quite a high Symbian version for the S60 class. For more detail, here is the N95's specification.



GENERAL
  2G Network: GSM 850 / 900 / 1800 / 1900
  3G Network: HSDPA 2100; HSDPA 850 / 1900 (American version)
  SIM: Mini-SIM
  Announced: 2006, September. Released 2007, March
  Status: Discontinued

BODY
  Dimensions: 99 x 53 x 21 mm, 90 cc (3.90 x 2.09 x 0.83 in)
  Weight: 120 g (4.23 oz)

DISPLAY
  Type: TFT, 16M colors
  Size: 240 x 320 pixels, 2.6 inches, 40 x 53 mm (~154 ppi pixel density)

SOUND
  Alert types: Vibration; downloadable polyphonic, monophonic, MP3 ringtones
  Loudspeaker: Yes, with stereo speakers
  3.5mm jack: Yes

MEMORY
  Card slot: microSD, up to 8GB, hot swap, 128 MB card included
  Internal: 160 MB storage, 64 MB RAM

DATA
  GPRS: Class 10 (4+1/3+2 slots), 32 - 48 kbps
  EDGE: Class 32, 296 kbps; DTM Class 11, 177 kbps
  Speed: HSDPA
  WLAN: Wi-Fi 802.11 b/g, UPnP technology
  Bluetooth: Yes, v2.0 with A2DP
  Infrared port: Yes
  USB: Yes, v2.0 miniUSB

CAMERA
  Primary: 5 MP, 2592 x 1944 pixels, Carl Zeiss optics, autofocus, LED flash
  Video: Yes, VGA@30fps
  Secondary: QVGA videocall camera

FEATURES
  OS: Symbian OS 9.2, S60 rel. 3.1
  CPU: 332 MHz dual ARM 11
  GPU: 3D graphics hardware accelerator
  Messaging: SMS, MMS, Email, Instant Messaging
  Browser: WAP 2.0/xHTML, HTML
  Radio: Stereo FM radio; Visual Radio
  GPS: Yes, with A-GPS support; Nokia Maps
  Java: Yes, MIDP 2.0
  Colors: Silver, Plum, Black, Pink, Red
  - Dual slide design
  - WMV/RV/MP4/3GP video player
  - MP3/WMA/WAV/RA/AAC/M4A music player
  - TV-out
  - Organizer
  - Document viewer (Word, Excel, PowerPoint, PDF)
  - Predictive text input
  - Push to talk
  - Voice dial/memo

BATTERY
  Standard battery, Li-Ion 950 mAh (BL-5F)
  Stand-by: Up to 220 h (2G) / 192 h (3G)
  Talk time: Up to 6 h 30 min (2G) / 2 h 42 min (3G)

MISC
  SAR US: 0.79 W/kg (head), 0.76 W/kg (body)
  SAR EU: 0.50 W/kg (head)

Source: gsmarena.com

From this we can see that this "cheap" phone should not be underestimated: although it is old, its capabilities are still relatively youthful. There are two variants of the N95:
1. The N95 with a 2 GB external memory card.

2. The N95 8GB with built-in storage.

There are not many differences between the two versions; the most notable is simply the storage capacity.
As for the various kinds of modding this phone supports, don't even ask: everything from ordinary tweaks to extreme builds like the one pictured here.

Well, that was a quick look at the Nokia N95. For all you Symbianers out there, this old phone is a must-try.



Features of Open Source GPS Tracking System

Posted by Unknown, Tuesday, 23 October 2012, 0 comments
http://linuxaria.com/article/features-of-open-source-gps-tracking-system?lang=en


Over the past few years, Global Positioning System (GPS) applications have become extremely popular among automobile consumers; in fact, anyone who drives a vehicle on a regular basis probably uses one. So much so that many car manufacturers offer GPS capabilities built directly into their cars. Mobile device providers have also found themselves competing over location-aware applications that use GPS technology. While there are several applications on the market offering functionality for individual consumers, there is not a lot available for companies or small business owners who need to manage several vehicles at once from a central location.
The Open GTS (Open GPS Tracking System) Project is an open source project focused on a GPS tracking application built specifically for managing small-business vehicle fleets. Fleet vehicles have different GPS requirements than individual vehicles: the dispatch manager's ability to track each vehicle's location through the work day is just as important as the driver's ability to navigate with accurate real-time mapping and directions.



Types of Transportation Fleets using Open Source GPS Tracking Systems

The Open GTS package is currently the only open source application that provides these small-business capabilities, and it has been downloaded by hundreds of small-business users worldwide in over one hundred and ten countries. Companies managing their automotive fleets with Open GTS include taxi services, parcel delivery, truck and van shipping, ATV and recreational vehicle rentals, business car rentals, water-based freight ships and barges, and farm vehicles.

Skinnable Web Interfaces

The convenient centralized tracking application is built to fit into any small-business application environment and can be customized accordingly. Along with the ability to code new extension modules or modify the base components as needed, it is easy to customize the user experience by adding your own CSS. A custom stylesheet can make the interface fit more naturally into your existing business environment, even incorporating your company logo and your particular background colors and fonts.
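For instance, a skin stylesheet might look something like this (the selector names here are purely illustrative, not actual Open GTS class names; inspect the generated markup for the real ones):

```css
/* Hypothetical skin overrides; selector names are illustrative only */
body {
    font-family: Arial, sans-serif;
    background-color: #f4f7fa;          /* company background color */
}
.page-header {
    background: url(/images/company-logo.png) no-repeat left center;
    min-height: 60px;                   /* room for the logo */
}
```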

Customized Reporting

Another feature of open source GPS tracking systems is the ability to generate custom reports based on your specific data needs. Because Open GTS uses XML for its underlying reporting structure, reports can be configured to cover a particular historical period, a particular set of vehicles in the fleet, or even a single vehicle.

Geofencing

Geofenced areas, also known as geozones, are geographic boundaries within which your fleet of vehicles is allowed to operate. Customizable geofencing zones allow users of open source GPS systems to define their own areas of operation and change them as their business grows. Multiple geozones can also be defined, each identified with a custom name, for better organization of all your different areas of operation.

Customizable Map Providers

Open GTS allows users to integrate a number of mapping programs, including Google Maps and Microsoft's mapping application Virtual Earth. The Mapstraction engine is also supported, which provides access to the popular mapping services Map24 and MapQuest.

Operating System Independence

Since open source GPS tracking systems are web applications, they can run on any operating system. The Open GTS tool itself is built on the Apache Tomcat application server, runs on the Java runtime environment, and uses MySQL as its relational database.

Localization and Compliance

A GPS tracking system, and Open GTS in particular, must offer easy options for localizing its interface and language support. Open GTS follows the standard i18n conventions for internationalization and localization.


11 Basic Linux NMAP command Examples for System administrators

Posted by Unknown, Sunday, 21 October 2012, 0 comments
http://www.linuxnix.com/2009/11/nmap-with-examples.html


NMAP (Network Mapper) is one of the most important network monitoring tools. It checks which ports are open on a machine.
Some important points to note about NMAP:
  • NMAP is an abbreviation of Network Mapper.
  • NMAP is used to scan ports on a machine, either local or remote (you only need an IP address or hostname to scan).
  • NMAP can be installed on Windows and Sun Solaris machines too.
  • NMAP can be used to scan large networks; remember, I am saying large networks.
  • NMAP can be used to get operating system details such as open ports, the software used for a service and its version number, the vendor of the network card, and the uptime of the system (don't worry, we will see all these things in this post).
  • Please do not use NMAP on machines for which you don't have permission.
  • It can be used by hackers to scan systems for vulnerabilities.
  • Just a fun note: you can see NMAP used by Trinity in the Matrix Reloaded movie, when she tries to hack into the electric grid's supercomputer.
Note: the NMAP man page is one of the best man pages I have come across. It is written so that even a new user can understand what each option does, and it even includes examples of how to use NMAP in different situations. When you have time, read it; you will get a lot of information.
Let us start with some examples to better understand the nmap command:
  1. Check for a particular port on the local machine.
  2. Use nmap to scan the local machine for open ports.
  3. Use nmap to scan remote machines for open ports.
  4. Use nmap to scan an entire network for open ports.
  5. Scan only the most common ports with the -F option.
  6. Scan a machine with the -v option for verbose mode.
  7. Scan a machine for open TCP ports.
  8. Scan a machine for open UDP ports.
  9. Scan a machine for services and their software versions.
  10. Scan for open protocols such as TCP, UDP, ICMP, and IGMP on a machine.
  11. Scan a machine to check which operating system it's running.
Example 1: Scanning for a single port on a machine
nmap -p portnumber hostname
Example:
nmap -p 53 192.168.0.1
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for localhost (192.168.0.1)
Host is up (0.000042s latency).
PORT STATE SERVICE
53/tcp open domain
Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
The above example checks whether port 53 (DNS) is open on 192.168.0.1.
Example 2: Scan an entire machine for open ports.
nmap hostname
Example:
nmap 192.168.0.1
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for localhost (192.168.0.1)
Host is up (0.00037s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
53/tcp open domain
631/tcp open ipp
Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
Example 3: Scan a remote machine for open ports
nmap remote-ip/host
Example:
nmap 192.168.0.2
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for localhost (192.168.0.2)
Host is up (0.00037s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
53/tcp open domain
631/tcp open ipp
Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
Example 4: Scan an entire network for IP addresses and open ports.
nmap network ID/subnet-mask
Example:
nmap 192.168.1.0/24
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for 192.168.1.1
Host is up (0.016s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
23/tcp open telnet
53/tcp open domain
80/tcp open http
5000/tcp open upnp
Nmap scan report for 192.168.1.2
Host is up (0.036s latency).
All 1000 scanned ports on 192.168.1.2 are closed
Nmap scan report for 192.168.1.3
Host is up (0.000068s latency).
All 1000 scanned ports on 192.168.1.3 are closed
Nmap done: 256 IP addresses (3 hosts up) scanned in 22.19 seconds
Example 5: Scan just the ports; don't scan for the IP address, hardware address, hostname, operating system name, version, or uptime. This is much faster, as the man page notes; in our tests it was about 70% faster at scanning ports than a normal scan.
nmap -F hostname
-F is fast mode: it scans only the most common ports and does no other scanning.
Example:
nmap -F 192.168.1.1
Starting Nmap 5.21 ( http://nmap.org ) 
Nmap scan report for 192.168.1.1
Host is up (0.028s latency).
Not shown: 96 closed ports
PORT STATE SERVICE
23/tcp open telnet
53/tcp open domain
80/tcp open http
5000/tcp open upnp
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
Example 6: Scan the machine and give as much detail as possible.
nmap -v hostname
Example:
nmap -v 192.168.1.1
Starting Nmap 5.21 ( http://nmap.org )
Initiating Ping Scan at 13:31
Scanning 192.168.1.1 [2 ports]
Completed Ping Scan at 13:31, 0.00s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 13:31
Completed Parallel DNS resolution of 1 host. at 13:31, 0.00s elapsed
Initiating Connect Scan at 13:31
Scanning 192.168.1.1 [1000 ports]
Discovered open port 53/tcp on 192.168.1.1
Discovered open port 80/tcp on 192.168.1.1
Discovered open port 23/tcp on 192.168.1.1
Discovered open port 5000/tcp on 192.168.1.1
Completed Connect Scan at 13:31, 0.21s elapsed (1000 total ports)
Nmap scan report for 192.168.1.1
Host is up (0.014s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
23/tcp open telnet
53/tcp open domain
80/tcp open http
5000/tcp open upnp
Read data files from: /usr/share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
Example 7: Scan a machine for open TCP ports
nmap -sT hostname
Here s stands for scan and T for TCP: -sT performs a TCP connect scan of TCP ports only.
Example:
nmap -sT 192.168.1.1
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for 192.168.1.1
Host is up (0.022s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
23/tcp open telnet
53/tcp open domain
80/tcp open http
5000/tcp open upnp
Nmap done: 1 IP address (1 host up) scanned in 0.28 seconds
Example 8: Scan a machine for open UDP ports.
nmap -sU hostname
Here U indicates UDP port scanning. This scan requires root privileges.
Example 9: Scan the ports and get the version of each service running on the machine
nmap -sV hostname
s stands for scan and V for version: -sV probes each open port to determine the version of the network service running on it.
Example:
nmap -sV 192.168.1.1
Starting Nmap 5.21 ( http://nmap.org )
Stats: 0:00:06 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 0.00% done
Nmap scan report for localhost (192.168.1.1)
Host is up (0.000010s latency).
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
53/tcp open domain dnsmasq 2.59
631/tcp open ipp CUPS 1.5
Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 6.38 seconds
Example 10: Check which protocols (not ports), such as TCP, UDP, and ICMP, are supported by the remote machine. The -sO option reports each supported protocol and its state.
nmap -sO hostname
Example:
nmap -sO localhost
Starting Nmap 5.21 ( http://nmap.org )
Nmap scan report for localhost (127.0.0.1)
Host is up (0.14s latency).
Not shown: 249 closed protocols
PROTOCOL STATE SERVICE
1 open icmp
2 open igmp
6 open tcp
17 open udp
103 open|filtered pim
136 open|filtered udplite
255 open|filtered unknown
Nmap done: 1 IP address (1 host up) scanned in 2.57 seconds
Example 11: Scan a system for operating system and uptime details
nmap -O hostname
-O performs operating system detection along with the default port scan
Example:
nmap -O google.com
Starting Nmap 5.21 ( http://nmap.org ) 
Nmap scan report for google.com (74.125.236.168)
Host is up (0.021s latency).
Hostname google.com resolves to 11 IPs. Only scanned 74.125.236.168
rDNS record for 74.125.236.168: maa03s16-in-f8.1e100.net
Not shown: 997 filtered ports
PORT STATE SERVICE
80/tcp open http
113/tcp closed auth
443/tcp open https
Device type: general purpose|WAP
Running (JUST GUESSING) : FreeBSD 6.X (91%), Apple embedded (85%)
Aggressive OS guesses: FreeBSD 6.2-RELEASE (91%), Apple AirPort Extreme WAP v7.3.2 (85%)
No exact OS matches for host (test conditions non-ideal).
OS detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 16.23 seconds
Some sites to refer to (not for practical examples, but to get a good grasp of the concepts):
nmap.org : the official site for NMAP
en.wikipedia.org/wiki/Nmap


Understanding Linux / Unix Filesystem Inode

Posted by Unknown, Thursday, 18 October 2012, 0 comments
http://www.geekride.com/understanding-unix-linux-filesystem-inodes


Inode, short for Index Node, is what the whole Linux filesystem is built on. Anything that resides in the filesystem is represented by an inode. Take the example of an old-school library that still works with a register holding information about its books and their locations: which cabinet and which row a book resides in, and who wrote it. In this analogy, the register line specific to one book is the inode. In the same way, inodes store information about filesystem objects, which we will study in detail below.
So, in a Linux system, the filesystem consists mainly of two parts: the metadata and the data itself. Metadata, in other words, is data about the data, and inodes take care of the metadata part of the filesystem.

Inode Basics:

As noted above, every file and directory in the filesystem is associated with an inode. An inode is a data structure that stores the following information about its object:
  • Size of the file (in bytes)
  • Device ID (of the device containing the file)
  • User ID (of the owner)
  • Group ID
  • File mode (how the owner, group, and others may access the file)
  • Extended attributes (such as ACLs)
  • File access, change, and modification timestamps
  • Link count (number of hard links pointing to the inode; soft links are not counted here)
  • Pointers to the disk blocks that store the file's content
  • File type (regular file, directory, or special block device)
  • Block size of the filesystem
  • Number of blocks the file is using
Note that traditional Linux filesystems never store the file creation time, a point that confuses a lot of people. The various timestamps stored in an inode are explained in detail in this article.
A typical inode's data will look something like this:
# stat 01
  File: `01'
  Size: 923383      Blocks: 1816       IO Block: 4096   regular file
Device: 803h/2051d  Inode: 12684895    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-09-07 01:46:54.000000000 -0500
Modify: 2012-04-27 06:22:02.000000000 -0500
Change: 2012-04-27 06:22:02.000000000 -0500

How / when are inodes created?

How inodes are created depends on the filesystem you are using. Filesystems such as ext3 create a fixed number of inodes when the filesystem itself is created, so the supply of inodes is limited. Others, such as JFS and XFS, also create inodes at filesystem creation time but use dynamic inode allocation, increasing the number of inodes as needed and thus avoiding the situation where all the inodes get used up.

What happens when someone tries to access a file:

When users access a file, or any information related to it, they use the file name to do so. Internally, the file name is first mapped to an inode number via a directory table; through that inode number the corresponding inode is then accessed. A separate inode table provides the mapping from inode numbers to the inodes themselves.
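You can watch this name-to-inode mapping from the shell. A quick sketch, assuming GNU coreutils on Linux (the scratch file name is just an example):

```shell
# create a scratch file and look up the inode number its directory entry maps to
tmpdir=$(mktemp -d)
touch "$tmpdir/demo.txt"
ls -i "$tmpdir/demo.txt"                        # prints the inode number before the name
stat -c 'inode of %n is %i' "$tmpdir/demo.txt"  # same number, read via the inode itself
rm -r "$tmpdir"
```

Both commands report the same number, because "ls -i" reads it from the directory entry while "stat" reads it from the inode.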

Inode Pointer Structure:

As already explained, an inode stores only a file's metadata, including the locations of the blocks where the file's real data is stored. This is where the inode pointer structure comes in.
As explained in the Wikipedia article, the structure could have 11 to 13 pointers, but most filesystems use 15. These 15 pointers consist of:
  • Twelve pointers that point directly to blocks of the file’s data, called direct pointers.
  • One singly indirect pointer, which points to a block of pointers that then point to blocks of the file’s data.
  • One doubly indirect pointer, which points to a block of pointers that point to other blocks of pointers that then point to blocks of the file’s data.
  • One triply indirect pointer, which points to a block of pointers that point to other blocks of pointers that point to other blocks of pointers that then point to blocks of the file’s data.
The layers above can be visualized in a diagram:
[Diagram: Inode pointer structure (from Wikipedia, Wikimedia Commons license)]
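A quick back-of-the-envelope calculation shows why the indirect layers matter. Assuming, as a hypothetical example, 4 KiB blocks and 4-byte block pointers (classic ext2/ext3-style values), shell arithmetic gives the maximum file size the 15-pointer scheme can address:

```shell
# maximum file size reachable through the 15-pointer scheme,
# assuming 4 KiB blocks and 4-byte block pointers
BS=4096                           # block size in bytes
PPB=$((BS / 4))                   # pointers per indirect block: 1024
DIRECT=$((12 * BS))               # 12 direct blocks: 48 KiB
SINGLE=$((PPB * BS))              # 1 singly indirect pointer: 4 MiB
DOUBLE=$((PPB * PPB * BS))        # 1 doubly indirect pointer: 4 GiB
TRIPLE=$((PPB * PPB * PPB * BS))  # 1 triply indirect pointer: 4 TiB
TOTAL=$((DIRECT + SINGLE + DOUBLE + TRIPLE))
echo "max file size: $((TOTAL / 1024 / 1024 / 1024)) GiB"   # prints: max file size: 4100 GiB
```

So the twelve direct pointers cover only small files cheaply, while each indirect layer multiplies the reachable size by about a thousand.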

FAQs:

Q. How do I define an inode in one line?
A. An inode is a data structure on a traditional Unix-style filesystem such as UFS or ext3 that stores basic information about a regular file, directory, or other filesystem object.
Q. How can I see a file's or directory's inode number?
A. You can use the "stat" command to see the full inode information, or use the "-i" argument with the "ls" command to see just the inode number.
Q. How do I find the total number of inodes in a filesystem, and the inode usage?
A. The "df -i" command reports the total, used, and free inode counts for each mounted filesystem.
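For example (the exact numbers will differ on every system):

```shell
# show total, used, and free inodes for the filesystem holding /
df -i /
```

On an ext3 filesystem the "Inodes" column stays constant for the life of the filesystem, which is exactly the fixed allocation described earlier.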
Q. Why doesn't the inode information contain the filename?
A. Inodes store only information that is unique to the inode. With hard links, two different file names can point to the same inode, so it's better not to store a filename inside the inode.
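You can see this for yourself with a hard link. A short demonstration, assuming GNU coreutils (the file names are scratch examples):

```shell
# two directory entries, one inode: a hard-link demonstration
tmpdir=$(mktemp -d)
touch "$tmpdir/original"
ln "$tmpdir/original" "$tmpdir/hardlink"      # create a second name for the same inode
ls -i "$tmpdir/original" "$tmpdir/hardlink"   # both names show the same inode number
stat -c 'link count: %h' "$tmpdir/original"   # prints: link count: 2
rm -r "$tmpdir"
```

Since one inode now answers to two names, storing either name inside the inode would be ambiguous.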
Q. What if an inode has no links?
A. An inode with no links (a link count of 0) is removed from the filesystem and its resources are freed for reallocation, but deletion must wait until every process that has the file open finishes accessing it.
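You can observe this from the shell: remove a file while a descriptor still has it open, and the data stays readable until the descriptor is closed. A sketch assuming a shell with fd redirection, such as bash:

```shell
tmpfile=$(mktemp)
echo "still here" > "$tmpfile"
exec 3< "$tmpfile"   # keep the inode open on file descriptor 3
rm "$tmpfile"        # the directory entry is gone; the link count drops to 0
cat <&3              # prints: still here  (the inode survives while fd 3 is open)
exec 3<&-            # closing the descriptor lets the kernel free the inode
```

This is also why deleting a huge log file that a running daemon still holds open does not free any disk space until the daemon closes or reopens it.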
Q. Does the inode change when we move a file from one location to another?
A. The inode number stays the same when the file is moved within a single filesystem, because the move is just a rename of the directory entry. Across filesystems the data is copied into a new inode, so the inode number changes.
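The same-filesystem case is easy to verify. A small sketch, assuming GNU coreutils (scratch paths, so both names live on one filesystem):

```shell
# renaming within one filesystem leaves the inode number untouched
tmpdir=$(mktemp -d)
touch "$tmpdir/before"
old=$(stat -c '%i' "$tmpdir/before")
mv "$tmpdir/before" "$tmpdir/after"   # a pure rename: only the directory entry changes
new=$(stat -c '%i' "$tmpdir/after")
echo "$old $new"                      # the two inode numbers are identical
rm -r "$tmpdir"
```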
Q. Does creating a new file or directory create a new inode?
A. No. On filesystems such as ext3, creating a file or directory just claims an already existing free inode and fills in its information; it does not create a new inode. Inodes are created at filesystem creation time (except on filesystems with dynamic inode allocation, as explained above).
Q. Can I find a file from an inode number?
A. Yes, with the "find" command. Inode numbers are unique only within a single filesystem, so add "-xdev" to keep find from crossing filesystem boundaries:
# find / -xdev -inum inode-number -exec ls -l {} \;
Replacing the "ls" command with "rm", you can also remove a file by its inode number:
# find / -xdev -inum inode-number -exec rm -f {} \;

Reference:

  1. Wikipedia
  2. Linux Magazine
