
Sunday, January 20, 2013

How To Find Your IP Address . DNS Address . IPv4 . IPv6

An IP address (Internet Protocol address) is a unique address that certain electronic devices use in order to identify and communicate with each other on a computer network utilizing the Internet Protocol standard (IP); in simpler terms, a computer address. Any participating network device, including routers, computers, time-servers, printers, Internet fax machines, and some telephones, can have its own unique address. An IP address can also be thought of as the equivalent of a street address or a phone number (compare: VoIP, voice over Internet Protocol) for a computer or other network device on the Internet. Just as each street address and phone number uniquely identifies a building or telephone, an IP address can uniquely identify a specific computer or other network device on a network. An IP address differs from other contact information, however, because the linkage of a user's IP address to his or her name is not publicly available information.

IP addresses can appear to be shared by multiple client devices, either because they are part of a shared hosting web server environment or because a network address translator (NAT) or proxy server acts as an intermediary agent on behalf of its customers, in which case the real originating IP addresses might be hidden from the server receiving a request. A common practice is to have a NAT hide a large number of IP addresses in the private address space defined by RFC 1918, an address block that cannot be routed on the public Internet. Only the "outside" interface(s) of the NAT need to have Internet-routable addresses. Most commonly, the NAT device maps TCP or UDP port numbers on the outside to individual private addresses on the inside. Just as there may be site-specific extensions on a telephone number, the port numbers are site-specific extensions to an IP address. IP addresses are managed and created by the Internet Assigned Numbers Authority (IANA). The IANA generally allocates super-blocks to Regional Internet Registries, who in turn allocate smaller blocks to Internet service providers and enterprises.

DNS Address: On the Internet, the Domain Name System (DNS) associates various sorts of information with so-called domain names; most importantly, it serves as the "phone book" for the Internet: it translates human-readable computer hostnames, e.g. en.wikipedia.org, into the IP addresses that networking equipment needs for delivering information. It also stores other information, such as the list of mail exchange servers that accept email for a given domain. In providing a worldwide keyword-based redirection service, the Domain Name System is an essential component of contemporary Internet use.

Uses: The most basic use of DNS is to translate hostnames to IP addresses; in very simple terms, it is like a phone book. For example, if you want to know the Internet address of en.wikipedia.org, the Domain Name System can be used to tell you it is 66.230.200.100. DNS also has other important uses. Pre-eminently, DNS makes it possible to assign Internet destinations to the human organization or concern they represent, independently of the physical routing hierarchy represented by the numerical IP address. Because of this, hyperlinks and Internet contact information can remain the same, whatever the current IP routing arrangements may be, and can take a human-readable form (such as "wikipedia.org") which is rather easier to remember than an IP address (such as 66.230.200.100).
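Since the title of this post promises a way to actually find these addresses, here is a minimal sketch using only Python's standard socket module. It resolves a hostname through DNS (en.wikipedia.org is the example used in the text; the addresses returned today will differ from the 66.230.200.100 quoted above) and asks the operating system which local address it would use to reach the Internet. Behind a NAT, that local address will be a private RFC 1918 address rather than your public one, for exactly the reasons described above.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system resolver (and therefore DNS) for the addresses of a hostname."""
    entries = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address is sockaddr[0].
    return sorted({entry[4][0] for entry in entries})

def local_address() -> str:
    """Return the local IP address used to reach the wider Internet.

    connect() on a UDP socket sends no packets; it only makes the OS choose
    an outgoing interface, whose address we then read back.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("192.0.2.1", 80))  # documentation address, used only for route selection
        return s.getsockname()[0]

if __name__ == "__main__":
    print(resolve("en.wikipedia.org"))
    print(local_address())
```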
People take advantage of this when they recite meaningful URLs and e-mail addresses without caring how the machine will actually locate them. The Domain Name System distributes the responsibility for assigning domain names and mapping them to IP networks by allowing an authoritative server for each domain to keep track of its own changes, avoiding the need for a central registrar to be continually consulted and updated.

History: The practice of using a name as a more human-legible abstraction of a machine's numerical address on the network predates even TCP/IP, and goes all the way back to the ARPANET era. Back then, however, a different system was used, as DNS was only invented in 1983, shortly after TCP/IP was deployed. With the older system, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI (now SRI International). The HOSTS.TXT file mapped numerical addresses to names. A hosts file still exists on most modern operating systems, either by default or through configuration, and allows users to specify an IP address (e.g. 192.0.34.166) to use for a hostname (e.g. www.example.net) without checking DNS. As of 2006, the hosts file serves primarily for troubleshooting DNS errors or for mapping local addresses to more organic names. Systems based on a hosts file have inherent limitations because of the obvious requirement that every time a given computer's address changed, every computer that seeks to communicate with it would need an update to its hosts file. The growth of networking called for a more scalable system: one that recorded a change in a host's address in one place only. Other hosts would learn about the change dynamically through a notification system, thus completing a globally accessible network of all hosts' names and their associated IP addresses.

At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications appear in RFC 882 and RFC 883. In 1987, the publication of RFC 1034 and RFC 1035 updated the DNS specification and made RFC 882 and RFC 883 obsolete. Several more recent RFCs have proposed various extensions to the core DNS protocols. In 1984, four Berkeley students (Douglas Terry, Mark Painter, David Riggle and Songnian Zhou) wrote the first UNIX implementation, which was maintained by Ralph Campbell thereafter. In 1985, Kevin Dunlap of DEC significantly rewrote the DNS implementation and renamed it BIND (Berkeley Internet Name Domain, previously: Berkeley Internet Name Daemon). Mike Karels, Phil Almquist and Paul Vixie have maintained BIND since then. BIND was ported to the Windows NT platform in the early 1990s. Due to BIND's long history of security issues and exploits, several alternative nameserver and resolver programs have been written and distributed in recent years.

How DNS Works in Theory: The domain name space consists of a tree of domain names. Each node or branch in the tree has one or more resource records, which hold information associated with the domain name. The tree sub-divides into zones. A zone consists of a collection of connected nodes authoritatively served by an authoritative DNS nameserver. (Note that a single nameserver can host several zones.) When a system administrator wants to let another administrator control a part of the domain name space within his or her zone of authority, he or she can delegate control to the other administrator.
This splits a part of the old zone off into a new zone, which comes under the authority of the second administrator's nameservers. The old zone is no longer authoritative for what falls under the authority of the new zone. A resolver looks up the information associated with nodes. A resolver knows how to communicate with name servers by sending DNS requests and heeding DNS responses. Resolving usually entails iterating through several name servers to find the needed information. Some resolvers function simplistically and can only communicate with a single name server. These simple resolvers rely on a recursing name server to perform the work of finding information for them.

IPv4: Internet Protocol version 4 is the fourth iteration of the Internet Protocol (IP) and the first version of the protocol to be widely deployed. IPv4 is the dominant network layer protocol on the Internet and, apart from IPv6, it is the only protocol used on the Internet. It is described in IETF RFC 791 (September 1981), which made obsolete RFC 760 (January 1980). The United States Department of Defense also standardized it as MIL-STD-1777. IPv4 is a data-oriented protocol to be used on a packet-switched internetwork (e.g., Ethernet). It is a best-effort protocol in that it does not guarantee delivery. It does not make any guarantees on the correctness of the data; it may result in duplicated packets and/or packets out of order. These aspects are addressed by an upper-layer protocol (e.g., TCP, and partly by UDP). The entire purpose of IP is to provide unique global computer addressing to ensure that two computers communicating over the Internet can uniquely identify one another.

Addressing: IPv4 uses 32-bit (4-byte) addresses, which limits the address space to 4,294,967,296 possible unique addresses. However, some are reserved for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces the number of addresses that can be allocated as public Internet addresses. As the pool of available addresses is consumed, an IPv4 address shortage appears to be inevitable; however, Network Address Translation (NAT) has significantly delayed this inevitability. This limitation has helped stimulate the push towards IPv6, which is currently in the early stages of deployment and is currently the only contender to replace IPv4.

Allocation: Originally, the IP address was divided into two parts:

* Network id: the first octet
* Host id: the last three octets

This created an upper limit of 256 networks. As the networks began to be allocated, this was soon seen to be inadequate. To overcome this limit, different classes of network were defined, in a system which later became known as classful networking. Five classes were created (A, B, C, D, and E), three of which (A, B, and C) had different lengths for the network field. The rest of the address field in these three classes was used to identify a host on that network, which meant that each network class had a different maximum number of hosts. Thus there were a few networks with lots of host addresses and numerous networks with only a few addresses. Class D was for multicast addresses and class E was reserved. Around 1993, these classes were replaced with a Classless Inter-Domain Routing (CIDR) scheme, and the previous scheme was dubbed "classful" by contrast.
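The addressing figures above are easy to verify with Python's standard ipaddress module. The short sketch below is illustrative only: it recomputes the size of the 32-bit address space and of the RFC 1918 private blocks that a NAT typically hides, and classifies a couple of example addresses.

```python
import ipaddress

# 32-bit addresses give 2**32 possible values, as stated in the text.
print(f"IPv4 address space: {2 ** 32:,} addresses")

# The three RFC 1918 private blocks, which cannot be routed on the public Internet.
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
private_total = sum(net.num_addresses for net in rfc1918)
print(f"RFC 1918 private addresses: {private_total:,}")  # roughly 18 million

# The module also classifies individual addresses.
print(ipaddress.ip_address("192.168.1.10").is_private)   # True
print(ipaddress.ip_address("224.0.0.1").is_multicast)    # True (the old class D range)
```

Running it prints 4,294,967,296 for the full space and 17,891,328 for the private blocks, which matches the roughly 18 million quoted above.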
CIDR's primary advantage is to allow re-division of Class A, B and C networks so that smaller (or larger) blocks of addresses may be allocated to entities (such as Internet service providers, or their customers) or Local Area Networks. The actual assignment of an address is not arbitrary. The fundamental principle of routing is that an address encodes information about a device's location within a network. This implies that an address assigned to one part of a network will not function in another part of the network. A hierarchical structure, created by CIDR and overseen by the Internet Assigned Numbers Authority (IANA) and its Regional Internet Registries (RIRs), manages the assignment of Internet addresses worldwide. Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments; information from these databases plays a central role in numerous tools that attempt to locate IP addresses geographically.

IPv6: Internet Protocol version 6 (IPv6) is a network layer protocol for packet-switched internetworks. It is designated as the successor of IPv4, the current version of the Internet Protocol, for general use on the Internet. The main improvement brought by IPv6 is a much larger address space that allows greater flexibility in assigning addresses. IPv6 can support 2^128 (about 3.4 × 10^38) addresses, or approximately 5 × 10^28 addresses for each of the roughly 6.5 billion people[1] alive today. It was not the intention of IPv6 designers, however, to give permanent unique addresses to every individual and every computer. Rather, the extended address length eliminates the need to use network address translation to avoid address exhaustion, and also simplifies aspects of address assignment and renumbering when changing providers.

Introduction: By the early 1990s, it was clear that the change to a classless network introduced a decade earlier was not enough to prevent IPv4 address exhaustion and that further changes to IPv4 were needed.[2] By the winter of 1992, several proposed systems were being circulated, and by the fall of 1993, the IETF announced a call for white papers (RFC 1550) and the creation of the "IP, the Next Generation" (IPng Area) working groups.[2][3] IPng was adopted by the Internet Engineering Task Force on July 25, 1994 with the formation of several "IP Next Generation" (IPng) working groups.[2] By 1996, a series of RFCs was released defining IPv6, starting with RFC 1883 (later obsoleted by RFC 2460). (Incidentally, IPv5 was not a successor to IPv4, but an experimental flow-oriented streaming protocol intended to support video and audio.) It is expected that IPv4 will be supported alongside IPv6 for the foreseeable future. IPv4-only nodes (clients or servers) will not be able to communicate directly with IPv6 nodes, and will need to go through an intermediary.

Features of IPv6: To a great extent, IPv6 is a conservative extension of IPv4. Most transport- and application-layer protocols need little or no change to work over IPv6; exceptions are application protocols that embed network-layer addresses (such as FTP or NTPv3). Applications, however, usually need small changes and a recompile in order to run over IPv6.

Larger address space: The main feature of IPv6 that is driving adoption today is the larger address space: addresses in IPv6 are 128 bits long versus 32 bits in IPv4.
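As a small, purely illustrative sketch of the two ideas just described, the snippet below uses Python's standard ipaddress module to carve a CIDR block into smaller customer-sized prefixes and then prints the size of the 128-bit IPv6 address space. The 203.0.113.0/24 prefix is a documentation range chosen for the example, not a real allocation.

```python
import ipaddress

# CIDR re-division: one block can be split into whatever prefix length is needed,
# rather than being forced into the old class A/B/C sizes.
provider_block = ipaddress.ip_network("203.0.113.0/24")   # documentation prefix (TEST-NET-3)
for customer_net in provider_block.subnets(new_prefix=26):
    print(customer_net, f"({customer_net.num_addresses} addresses)")

# The IPv6 address space: 128-bit addresses give 2**128 possible values.
print(f"IPv6 address space: {2 ** 128:.2e} addresses (about 3.4 x 10^38)")
```

The loop prints four /26 networks of 64 addresses each, which is exactly the kind of right-sized allocation that classful addressing could not express.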
The larger address space avoids the potential exhaustion of the IPv4 address space without the need for network address translation (NAT) and other devices that break the end-to-end nature of Internet traffic. NAT may still be necessary in rare cases, but Internet engineers recognize that it will be difficult in IPv6 and are trying to avoid it whenever possible. The larger space also makes administration of medium and large networks simpler, by avoiding the need for complex subnetting schemes. Subnetting will, ideally, revert to its purpose of logical segmentation of an IP network for optimal routing and access. The drawback of the large address size is that IPv6 carries some bandwidth overhead over IPv4, which may hurt regions where bandwidth is limited (header compression can sometimes be used to alleviate this problem). IPv6 addresses are harder to memorize than IPv4 addresses, although even IPv4 addresses are much harder to memorize than Domain Name System (DNS) names. DNS protocols have been modified to support IPv6 as well as IPv4.

Stateless autoconfiguration of hosts: IPv6 hosts can be configured automatically when connected to a routed IPv6 network. When first connected to a network, a host sends a link-local multicast request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters. If IPv6 autoconfiguration is not suitable, a host can use stateful autoconfiguration (DHCPv6) or be configured manually. Stateless autoconfiguration is only suitable for hosts: routers must be configured manually or by other means.

IPv6 scope: IPv6 defines three unicast address scopes: global, site, and link. Site-local addresses are non-link-local addresses that are valid within the scope of an administratively defined site and cannot be exported beyond it. Companion IPv6 specifications further define that only link-local addresses can be used when generating ICMP Redirect Messages [ND] and as next-hop addresses in most routing protocols. These restrictions imply that an IPv6 router must have a link-local next-hop address for all directly connected routes (routes for which the given router and the next-hop router share a common subnet prefix).
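To tie the IPv6 discussion back to the "find your address" theme of this post, here is a short hedged sketch in Python: it asks DNS for the AAAA (IPv6) records of a hostname and classifies a few example addresses by scope with the standard ipaddress module. The hostname www.google.com is only an assumption of a site that publishes AAAA records; if the name has no IPv6 address, the lookup raises socket.gaierror, which the sketch catches.

```python
import ipaddress
import socket

def ipv6_addresses(hostname: str) -> list[str]:
    """Return the IPv6 addresses (AAAA records) that DNS publishes for a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
    return sorted({info[4][0] for info in infos})

# Scope classification: link-local addresses (fe80::/10) are valid only on the local
# link, while loopback is ::1. 2001:db8::/32 is the IPv6 documentation prefix.
for text in ["fe80::1", "2001:db8::1", "::1"]:
    addr = ipaddress.IPv6Address(text)
    print(text, "link-local:", addr.is_link_local, "loopback:", addr.is_loopback)

if __name__ == "__main__":
    try:
        print(ipv6_addresses("www.google.com"))   # assumed to publish AAAA records
    except socket.gaierror:
        print("No IPv6 address found for that hostname.")
```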

Monday, January 14, 2013

How on Earth do Companies and Organisations lose Data?


engrossed with the concept of data backup and data loss. There is an almost unlimited amount of information covering subjects such as how to keep computer and server data secure, how to back up data, how to restore data, how to replicate data, who will remove your backup tapes to a remote location, and which companies can restore data from corrupt disks. With all this information and data technology available, I keep asking myself the same question over and over again: how and why is data terminally lost?

Much information on one web site does tend to contradict information on another site, but after all, most companies with sites of this nature are trying to sell you their solution, so you will never get an overall unbiased picture. There is, however, one underlying fact which we cannot get away from, a fact that is stated on most sites and one that I totally agree with: "Data is the lifeline of all companies and organisations; if data is terminally lost, then the chances of a company trading efficiently, or even surviving, two years after the disaster are not at all good." There are various statistics relating to this fact, but it is accepted that 50% to 70% of companies will go out of business within one year if data is lost. It is also acknowledged that everything within a business can be replaced: desks, chairs, buildings, even people can be replaced, everything but the data. Imagine, you sit on a chair and it breaks, you buy another one; you get the picture. Now imagine the consequences of data loss: you don't know who your customers are, you don't know who owes you money, you don't know who you owe money to, you don't know what orders you have to ship or who you are supposed to be visiting. Dire times are ahead.

Taking all of the above into account, I go back to the original question: if people, companies and organisations understand the true value and importance of their data, how do they manage to lose it? Data loss is totally unnecessary and unacceptable! I honestly feel most IT administrators do have the best intentions when it comes to the preservation of data. In my experience, data loss stems from only a few sources: human error, lack of resources or lack of planning. Total data loss should never occur when hardware fails.

Human error – I forgot to take the tapes offsite when fire or flood struck; the offsite tape removal company incorrectly labelled your tapes; I did not test the tape prior to backup and consequently my data was not backed up; I forgot to back up! I did not put my tapes in a secure safe and they were stolen overnight; my backup server crashed in the middle of a backup. The web is littered with stories like this; they are almost a comedy of errors. Responsibility for data backup cannot ever be placed upon the shoulders of one person or one team of people; if there is room for human error, then you have a flaw within your backup system.

Lack of resources – If you work for a corporate, your backup and restore system will probably have cost many hundreds of thousands of pounds. You will enjoy an automated disk-to-disk system replicated to a second remote location. Even though you are replicating to an offsite location, for added security you will most likely use this second site to back up to tape. This replicated infrastructure is way out of budget for small to medium enterprises. Backing up to tape is still not a cheap task: by the time you have purchased a tape drive, a server to fit the drive into, tape media and the backup software, you will have spent at least £3,000, and backing up to tape in the conventional way is still prone to the same potential human errors. The humans that make the errors also have to be paid; if a backup solution is not automated, then you will have to employ someone to take care of this, and I would prefer my staff to be carrying out more proactive, revenue-generating tasks.

Lack of planning – probably a harsh way of putting it; a lack of understanding of potential disasters may sound more forgiving. It all amounts to the same thing, but a lack of planning was recently highlighted in the UK when the Buncefield oil terminal exploded. Companies who backed up to tape and secured said tapes in a fireproof and waterproof safe were cruelly exposed when their building collapsed as a result of the explosion; backup tapes could not be recovered for weeks as they were in the safe under hundreds of tonnes of rubble. Their ability to trade was rendered impossible. Most companies feel that this situation will never happen to them, but let's put it into context: Buncefield is only the fifth largest of over 40 oil terminals and depots in the UK, all of which are near major towns and cities, so it could happen to you. This case is one of many where buildings have been destroyed and data has been lost as a result.

If you are still with me, you will notice I have only mentioned reasons for and ways of losing data. The rapid growth of cheap, high-speed internet connectivity, and consequently the greatly reduced cost of highly secure, fully automated offsite backup solutions, is the reason I ask "how can companies and organisations lose data?"

Offsite backup, also known as remote backup or online backup, is a solution that ticks all the boxes for small to medium companies with limited budget and resources. Why is this the case, I hear you ask? If you deal with a reputable company, remote online backup is cost effective, with solutions starting at around £25 per month; it is totally automated, and therefore requires no human intervention, resulting in no human error; and your data will always be available, as it will be replicated between two data centres. To summarise, you have an enterprise-class backup solution at your disposal for less than the cost of a low-end tape drive.
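For readers who want a feel for what "totally automated" means in practice, here is a hypothetical, minimal Python sketch of a scheduled offsite copy: archive a folder, record a checksum so the copy can be verified later, and push the archive to a remote location. The paths, the remote target and the use of scp are all assumptions for illustration; a commercial online backup service would supply its own client, encryption and retention policy.

```python
import hashlib
import shutil
import subprocess
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("/srv/company-data")               # assumed data directory
STAGING = Path("/var/backups/staging")           # assumed local staging area
REMOTE = "backup@offsite.example.com:/backups/"  # hypothetical offsite target

def make_archive() -> Path:
    """Create a timestamped .tar.gz of the source directory in the staging area."""
    STAGING.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(STAGING / f"data-{stamp}"), "gztar", SOURCE)
    return Path(archive)

def checksum(path: Path) -> str:
    """SHA-256 of the archive, so the offsite copy can be verified later."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload(path: Path) -> None:
    """Copy the archive offsite; scp is used purely as an example transport."""
    subprocess.run(["scp", str(path), REMOTE], check=True)

if __name__ == "__main__":
    archive = make_archive()
    print(archive.name, checksum(archive))
    upload(archive)
```

Run from a scheduler (cron or similar), a script like this removes the "I forgot" class of human error described earlier, which is the whole point of automation.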

To find out more, look for information about secure offsite data backup solutions.

Data Recovery - It May Not Be Too Late After All


· Definition of Data Recovery

Data recovery is the salvaging of lost data, or the making available of previously damaged data, stored on various damaged media such as hard disk drives, magnetic tapes, magnetic disks, zip disks, CD-ROMs, flash cards and other storage media. For example, a file which has been accidentally deleted can often be recovered again.
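As a toy illustration of the definition above (and only that; real recovery tools are far more sophisticated), the Python sketch below scans a raw disk image for JPEG signatures and carves out any images it finds. This works because "deleting" a file usually leaves its bytes on the medium until they are overwritten. The disk.img path is a hypothetical example.

```python
from pathlib import Path

JPEG_START = b"\xff\xd8\xff"   # JPEG Start Of Image marker
JPEG_END = b"\xff\xd9"         # JPEG End Of Image marker

def carve_jpegs(image_path: Path, out_dir: Path) -> int:
    """Search a raw disk image for JPEG markers and write out each candidate file."""
    data = image_path.read_bytes()          # fine for small demo images
    out_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    start = data.find(JPEG_START)
    while start != -1:
        end = data.find(JPEG_END, start)
        if end == -1:
            break
        (out_dir / f"recovered_{count}.jpg").write_bytes(data[start:end + 2])
        count += 1
        start = data.find(JPEG_START, end + 2)
    return count

if __name__ == "__main__":
    recovered = carve_jpegs(Path("disk.img"), Path("recovered"))  # hypothetical image
    print(f"Carved {recovered} candidate JPEG files")
```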

· Data Recovery Service Providers

There are various data recovery service providers working 24/7 in order to provide their customers with the best available data recovery services. These service providers have highly trained staff, and they are continuously improving their process technology and quality systems. They allow their customers to recover every important piece of data from all the popular file systems in cases of accidental file deletion, malfunctioning software, formatted hard drives, damaged or deleted partitions, and data erased or corrupted by a virus.

These data recovery service providers offer a complete range of solutions for the restoration of lost data. The software used by these providers is compatible with all editions of Microsoft Windows, as well as the Apple Macintosh, Sun Solaris, AIX, Linux and Unix operating systems. They also provide data recovery from different file systems such as FAT, NTFS, HFS, HFS+, NWFS, EXT2 and EXT3. In addition, they protect the important data on our computers with data safety and data security software that guards against unwanted access, corruption, modification or deletion by another user, and against harmful programs such as viruses, spyware, Trojan horses, etc.

· Some Common Software Used in Data Recovery

There are different data safety and data recovery software packages for hard drive data recovery. Some common software used in data recovery is mentioned below.

Data Recovery with FIRE: FIRE is portable, bootable, CD-ROM-based software that provides an immediate environment for performing data recovery, including forensic analysis, virus scanning, incident response and vulnerability assessment.

Data Recovery with LDE: LDE, or the Linux Disc Editor, was originally made to recover deleted files in Linux. Its simple curses interface closely resembles the old version of Norton Disk Editor for DOS.

NT Data Recovery: The NT Data Recovery software provides read access to NTFS drives in the MS-DOS environment. Its functionality includes support for files with long names, as well as compressed and fragmented files. The NTFS Reader for DOS allows files to be copied from NTFS volumes to FAT volumes or network drives.

Data Recovery with PC Inspector: The PC Inspector File Recovery software is compatible with NTFS and FAT file systems. It has the capability to recover files with their original date and time stamp and can also restore them to a network drive. It can recover many lost files even when the header entry is no longer available, and it can find partitions automatically on FAT systems even if the boot area/sector has been damaged or erased. PC Inspector File Recovery offers a user-friendly interface that scans the hard drive and automatically recovers files from deleted folders.