Website Copier

HTTrack is a website copier: it downloads a complete website, including its HTML pages, images and other files, to a local directory so you can browse it offline. How to use HTTrack is explained below with an example.

Installing HTTrack

Below is an example of how to use HTTrack.
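HTTrack is usually driven from the command line (it also ships with a GUI, WinHTTrack). As a minimal sketch, with the URL and output directory as placeholder examples, a typical invocation mirrors a site into a local folder:

```
# Mirror the site into ./mysite, following only links on the same domain;
# -O sets the output path, "+filter" limits which links are followed, -v is verbose
httrack "http://www.example.com/" -O ./mysite "+*.example.com/*" -v
```

Point it at any site you have the right to copy, and HTTrack rebuilds the link structure locally so the mirrored pages open in a normal browser.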

How Google Search Works?

Google is undoubtedly the most widely used search engine. It was founded by Larry Page and Sergey Brin. We should know the basic working and methodology used by the Google search engine, so I have explained it in very simple words. Read carefully.

Overview:

Okay, let's assume you want to design a little search engine that searches for requested keywords across a few websites (say, five). What would our approach be? First, we store the contents, i.e. the web pages, of those five websites in our database. Then we build an index of the important parts of those pages: titles, headings, meta tags and so on. Next we provide a simple search box where users can enter a query or keyword. The entered query is processed and matched against the keywords in the index, and the results are returned accordingly: a list of links to the actual websites, with preference given to some sites by a ranking algorithm. I hope this basic overview of how a search engine works is clear to you.
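As a toy illustration of this overview, here is a minimal inverted index in C. The documents, stop-word list, capacities and function names are all made-up for the sketch; a real engine stores crawled pages and ranks the results.

```c
#include <ctype.h>
#include <string.h>

#define MAX_WORDS 128   /* capacity of the toy index */
#define MAX_DOCS  8

/* One index entry: a word and the list of documents containing it. */
struct entry {
    char word[32];
    int  docs[MAX_DOCS];
    int  ndocs;
};

static struct entry index_table[MAX_WORDS];
static int nentries = 0;

/* Common, insignificant words ("stop words") are not indexed. */
static int is_stop_word(const char *w) {
    static const char *stops[] = { "a", "an", "as", "for", "the",
                                   "is", "or", "on", "of" };
    for (size_t i = 0; i < sizeof stops / sizeof stops[0]; i++)
        if (strcmp(w, stops[i]) == 0)
            return 1;
    return 0;
}

/* Record that `word` occurs in document `doc`. */
static void index_word(const char *word, int doc) {
    if (strlen(word) >= sizeof index_table[0].word)
        return;                         /* word too long for the toy index */
    for (int i = 0; i < nentries; i++)
        if (strcmp(index_table[i].word, word) == 0) {
            struct entry *e = &index_table[i];
            if (e->docs[e->ndocs - 1] != doc && e->ndocs < MAX_DOCS)
                e->docs[e->ndocs++] = doc;
            return;
        }
    if (nentries < MAX_WORDS) {
        strcpy(index_table[nentries].word, word);
        index_table[nentries].docs[0] = doc;
        index_table[nentries].ndocs = 1;
        nentries++;
    }
}

/* Indexer step: split a saved page into lowercase words, index each one. */
void index_document(const char *text, int doc) {
    char buf[256];
    strncpy(buf, text, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (char *tok = strtok(buf, " ,."); tok; tok = strtok(NULL, " ,.")) {
        for (char *p = tok; *p; p++)
            *p = (char)tolower((unsigned char)*p);
        if (!is_stop_word(tok))
            index_word(tok, doc);
    }
}

/* Query step: copy the ids of documents containing `word` into `out`. */
int search(const char *word, int *out) {
    for (int i = 0; i < nentries; i++)
        if (strcmp(index_table[i].word, word) == 0) {
            memcpy(out, index_table[i].docs,
                   (size_t)index_table[i].ndocs * sizeof *out);
            return index_table[i].ndocs;
        }
    return 0;
}
```

Here `index_document` plays the role of the crawler-plus-indexer pair, and `search` is the query-processing step; ranking the returned documents would be the final step.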
Now let's look at the same process in more detail.
A web search engine basically works in the following manner. There are three parts:
1. Web Crawling 
2. Indexing 
3. Query processing or searching
1. The first step is web crawling. A web crawler, or web spider, is a program that travels across the World Wide Web, downloading and saving web pages. A crawler is fed with the URLs of websites and proceeds from there, downloading and saving the web pages associated with those sites. Want to get a feel for a web crawler? Download one, feed it the links of a few websites, and it will start downloading the pages, images and other files associated with them. Google's web crawler is called Googlebot. Want to see the copies of web pages saved in Google's database (well, not exactly, but close)?
Let's take any website as an example, say http://www.wikipedia.org

Do this:

Go to Google and search for ‘wikipedia’. The Wikipedia link should appear at the top.
Click on the ‘Cached’ link shown beneath the result.
OR
Directly search for ‘cache:wikipedia.org’.
Then read the lines at the top of the page you get, and things will become clear.
2. After Googlebot has saved the pages, it submits them to the Google indexer. Indexing means extracting words from titles, headings, meta tags and so on. The indexed pages are stored in the Google index database, whose contents resemble the index at the back of a book. Google ignores common, insignificant words such as a, for, the, is, or, on (called stop words), which occur in almost every web page. Indexing is done primarily to improve the speed of searching.
3. The third part is query processing, or searching. It involves the search box where we enter the search query or keyword we are looking for. When a user enters a search query, Google matches the entered keywords against the pages saved in the index database and returns links to the actual web pages from which those index entries were retrieved. Priority is obviously given to the best-matching results. Google uses a patented algorithm called PageRank to help rank the web pages that match a given search string.
The above three steps are followed not only by Google but by most web search engines. Of course there are many variations, but the methodology is the same.
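The PageRank idea mentioned above can be sketched as a power iteration over a link graph. The three-page graph, damping factor and iteration count below are illustrative assumptions for the sketch, not Google's actual parameters:

```c
#define N       3      /* a made-up web of three pages: 0, 1 and 2 */
#define DAMPING 0.85   /* the damping factor commonly quoted for PageRank */
#define ITERS   50

/* link_graph[i][j] = 1 when page i links to page j. */
static const int link_graph[N][N] = {
    { 0, 1, 1 },   /* page 0 links to pages 1 and 2 */
    { 0, 0, 1 },   /* page 1 links to page 2 */
    { 1, 0, 0 },   /* page 2 links back to page 0 */
};

/* Power iteration: a page is important when important pages link to it.
 * `rank` must hold N doubles; the resulting ranks sum to 1. */
void pagerank(double *rank) {
    double next[N];
    for (int i = 0; i < N; i++)
        rank[i] = 1.0 / N;                      /* start from a uniform rank */
    for (int it = 0; it < ITERS; it++) {
        for (int j = 0; j < N; j++)
            next[j] = (1.0 - DAMPING) / N;      /* "random surfer" base share */
        for (int i = 0; i < N; i++) {
            int outdeg = 0;
            for (int j = 0; j < N; j++)
                outdeg += link_graph[i][j];
            if (outdeg == 0)
                continue;                       /* dangling page casts no votes */
            for (int j = 0; j < N; j++)
                if (link_graph[i][j])
                    next[j] += DAMPING * rank[i] / outdeg;
        }
        for (int j = 0; j < N; j++)
            rank[j] = next[j];
    }
}
```

In this graph, page 2 ends up ranked highest because both other pages link to it, which is exactly the intuition behind PageRank.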
What is Robots.txt?
Web administrators do not want web crawlers (or web spiders) to fetch every page or file of a website and show the links in search results. Robots.txt is a simple text file, placed in the top-level directory of a website, that lists the paths which administrators do not want fetched by crawlers. The first step of a well-behaved web crawler is to check the contents of robots.txt.

Example of the contents of robots.txt (the text after each # is an explanatory comment):

User-agent: *                          # applies to the web crawlers of all search engines

Disallow: /directory_name/file_name    # block a single file in a particular directory
Disallow: /directory_name/             # block all files in a particular directory

You can view the robots.txt of any website (if it exists), for example http://www.microsoft.com/robots.txt
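A crawler's robots.txt check essentially boils down to prefix matching the requested path against the Disallow rules. A minimal sketch in C (the function name, rule list and paths are made-up examples; real matching also handles wildcards and Allow rules):

```c
#include <string.h>

/* Return 1 when `path` falls under one of the Disallow prefixes,
 * mimicking the check a polite crawler performs before fetching. */
int is_disallowed(const char *path, const char *rules[], int nrules) {
    for (int i = 0; i < nrules; i++)
        if (strncmp(path, rules[i], strlen(rules[i])) == 0)
            return 1;   /* the rule is a prefix of the requested path */
    return 0;
}
```

With the rules `/private/` and `/tmp/draft.html`, a request for `/private/photos.html` is blocked while `/public/index.html` goes through.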

A Virus Program to Disable USB Ports

In this post I will show how to create a simple virus that disables or blocks the USB ports on a computer (PC). As usual, I use my favorite C programming language to create this virus. Anyone with a basic knowledge of the C language should be able to understand how this virus program works.

Once this virus is executed, it will immediately disable all the USB ports on the computer. As a result, you will not be able to use your pen drive or any other USB peripheral. The source code for this virus is available for download, and you can test it on your own computer without any worries, since a program to re-enable all the USB ports is also included.

1. Download the USB_Block.rar file on to your computer.

2. It contains the following 2 files.

  • block_usb.c (source code)
  • unblock_usb.c (source code)

3. You need to compile them before you can run them.

4. Upon compilation of block_usb.c you get block_usb.exe, a simple virus that will block (disable) all the USB ports on the computer upon execution (double click).

5. To test this virus, just run the block_usb.exe file and insert a USB pen drive (thumb drive). You will see that the pen drive never gets detected. To re-enable the USB ports, just run unblock_usb.exe (you need to compile unblock_usb.c first). Now insert the pen drive and it should be detected.

6. You can also change the icon of this file to make it look like a legitimate program.
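I have not reproduced the source code here, but programs of this kind typically work by flipping a documented Windows registry setting: setting the Start value of the USBSTOR service to 4 prevents USB mass-storage devices from loading (which is why the pen drive is not detected), and setting it back to 3 re-enables them. Note that this affects USB storage devices specifically, not every USB peripheral. As a sketch, the equivalent change expressed as a .reg file would be:

```
Windows Registry Editor Version 5.00

; Disable USB mass storage; change dword:00000004 to dword:00000003 to re-enable
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```

The C programs simply make this same registry change via the Windows registry API, which is why running unblock_usb.exe restores normal behavior.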

What is CAPTCHA and How Does it Work?

CAPTCHA or Captcha (pronounced cap-ch-uh), which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”, is a type of challenge-response test used to ensure that a response is generated by a human and not by a computer. In simple words, CAPTCHA is the word-verification test that you come across at the end of a sign-up form when signing up for a Gmail or Yahoo account. Almost every Internet user encounters CAPTCHAs in daily Internet usage, but only a few are aware of what they are and why they are used. So in this post you will find detailed information on how CAPTCHA works and why it is used.

What Purpose does CAPTCHA Exactly Serve?

CAPTCHA is mainly used to prevent automated software (bots) from performing actions on behalf of actual humans. For example, while signing up for a new email account, you will come across a CAPTCHA at the end of the sign-up form, which ensures that the form is filled out only by a legitimate human and not by automated software or a computer bot. The main goal of CAPTCHA is to pose a test that is simple and straightforward for any human to answer, but almost impossible for a computer to solve.

What is the Need to Create a Test that Can Tell Computers and Humans Apart?

To many, CAPTCHAs may seem silly and annoying, but they have the ability to protect systems from malicious attacks by people who try to game the system. Attackers can use automated software to generate a huge number of requests, causing a high load on the target server and degrading the quality of service of a given system, whether through abuse or resource expenditure. This can affect millions of legitimate users and their requests. CAPTCHAs can be deployed to protect systems that are vulnerable to email spam, such as the services of Gmail, Yahoo and Hotmail.

Who Uses CAPTCHA?

CAPTCHAs are mainly used by websites that offer services like online polls and registration forms. For example, web-based email services like Gmail, Yahoo and Hotmail offer free email accounts to their users, and during each sign-up process a CAPTCHA is used to prevent spammers from using a bot to generate hundreds of spam mail accounts.

Designing a CAPTCHA System

CAPTCHAs are designed around the fact that computers lack the ability humans have when it comes to processing visual data. It is much easier for humans to look at an image and pick out the patterns than it is for a computer, because computers lack the real intelligence that humans have by default. CAPTCHAs are implemented by presenting users with an image containing distorted or randomly stretched characters that only humans should be able to identify. Sometimes the characters are struck through or placed on a noisy background to make it even harder for computers to figure out the patterns.
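The image-distortion step is the hard part and is not shown here, but the surrounding server-side bookkeeping is simple: generate a random challenge string, render it as a distorted image, and later compare the user's answer case-insensitively. A minimal sketch of those two bookkeeping steps in C (the character set, lengths and function names are made-up, and a real system would use a cryptographically secure random source rather than rand()):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Characters chosen to avoid lookalikes such as 0/O and 1/I. */
static const char CHARSET[] = "23456789ABCDEFGHJKLMNPQRSTUVWXYZ";

/* Fill `out` with a random challenge of `len` characters plus a NUL.
 * The caller provides the seed; rand() is only adequate for a sketch. */
void captcha_generate(char *out, int len, unsigned seed) {
    srand(seed);
    for (int i = 0; i < len; i++)
        out[i] = CHARSET[rand() % (int)(sizeof CHARSET - 1)];
    out[len] = '\0';
}

/* Compare the stored answer with the user's input, ignoring case. */
int captcha_verify(const char *answer, const char *input) {
    if (strlen(answer) != strlen(input))
        return 0;
    for (; *answer; answer++, input++)
        if (toupper((unsigned char)*answer) != toupper((unsigned char)*input))
            return 0;
    return 1;
}
```

The server keeps the generated string (or a hash of it) in the session, sends only the distorted image to the client, and calls the verify step when the form comes back.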

Most, but not all, CAPTCHAs rely on a visual test. Some websites implement a totally different CAPTCHA system to tell humans and computers apart. For example, a user is presented with four images, three of which contain pictures of animals and one a flower, and is asked to select only the images containing animals. This Turing test can easily be solved by any human but is almost impossible for a computer.

Breaking the CAPTCHA

The challenge in breaking a CAPTCHA lies in the genuinely hard task of teaching a computer to process information the way humans do. Algorithms with artificial intelligence (AI) have to be designed to make the computer recognize patterns in images the way humans do. However, there is no universal algorithm that can break every CAPTCHA system, so each CAPTCHA algorithm has to be tackled individually. Such an attack might not work 100 percent of the time, but it can work often enough to be worthwhile to spammers.

Get Your Old Facebook Chat back

Facebook, the biggest social networking site, has been updating many features, such as video calling, and has now changed its chat into a sidebar chat, which is really annoying users: we can't see who is online, as it only shows the few friends we interact with most, and this is most irritating for people with a large number of friends. A really simple script installed in your browser gets your old Facebook chat back. Let's see how to get our favorite Facebook chat back.

How to Install Old Facebook Chat Script in Mozilla Firefox

In Mozilla Firefox, we first need to install the Greasemonkey add-on before installing the old Facebook chat script. Open Firefox and go to the Greasemonkey add-on's download page. On the download page, click the ‘Add to Firefox’ button; a popup will then ask about the installation. Click ‘Install’ and it's done.

Now that the Greasemonkey add-on is installed, go to the script's page and click the ‘Install’ button at the top right, the same script page used in the Google Chrome instructions below.

After installing, close Facebook and open it again to see that the old Facebook chat is available. If you can't find it, clear the cache files and try again. You can remove this script from the Greasemonkey add-on at any time to get the new sidebar chat box back.

How to Install Old Facebook Chat Script in Google Chrome

In Google Chrome it is a really easy task to install the script that enables the old Facebook chat. Go to the script's page and click the ‘Install’ button at the top right of the page. An install popup will appear; click ‘Install’ in that box and the script is installed. Now close the existing Facebook page and open Facebook again: you should see that your old favorite chat is available. If you can't see the old Facebook chat, clear the cache and open Facebook again. You can remove this script from Chrome's extensions menu at any time to get the new sidebar chat box back.

Hack Web Applications by Intercepting HTTP request/response using WebScarab

Hello Friends,

Today we will understand how to intercept the HTTP requests we send to a website and how to analyse the response headers. For this purpose we will use WebScarab, which you can download by searching for it on Google.

After you have installed the setup, you first have to configure your browser so that WebScarab can intercept the requests and responses.
I am taking the example of Firefox here. Go to Options > Advanced > Network > Settings, then select ‘Manual proxy configuration’ and enter the following values:
HTTP proxy: 127.0.0.1, Port: 8008
This sets up WebScarab to intercept requests by acting as a localhost proxy.

Now start WebScarab by clicking its icon. The screen that appears will look weird, something like the figure shown.
In the Intercept tab, select “Intercept requests”, and in the left-hand menu select the “GET” and “POST” options.
This makes WebScarab completely ready to intercept HTTP GET and POST requests. Now type any URL in your browser, e.g. google.com, and you will get a window showing the intercepted HTTP GET request. If you also click the “Intercept responses” button, WebScarab will intercept the response coming back to the browser from the Google server.
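For reference, a raw intercepted exchange looks roughly like the sketch below. The exact headers, versions and server fields vary by browser and site; these lines are illustrative, not a captured trace:

```
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.0
Accept: text/html,application/xhtml+xml
Connection: keep-alive

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: gws
Connection: keep-alive
```

Editing a line such as User-Agent in the intercept window before forwarding the request is exactly the kind of header modification discussed next.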

You can use this technique to analyse the various request and response headers, and let me tell you, this can be very powerful: if you make the right moves and changes in the headers, you can easily modify them to send invalid values to the server.
In WebScarab's main window, the Summary tab shows the details of all the intercepted requests and responses. This short tutorial gives you a basic understanding of how to use WebScarab to intercept HTTP traffic and analyse it. The rest is up to you and how far you can take it.