Wednesday, October 12, 2016

Hydra

In this post, I will continue with cracking types of attacks. Cracking email accounts plays an important role in cyber security in the sense that almost every platform (social media sites, forums, and even certain applications) predominantly uses email addresses to create and then manage accounts. So it is highly critical to keep your passwords safe and change them periodically. Mail servers can also be configured by administrators to not permit bruteforce attacks: after a certain number of failed login attempts, you would end up facing an unresponsive server. That is when you need to dig deeper into figuring out how you can evade this sort of restriction. Adjusting the wait time of your cracking process can help at this point.

Hydra is a cracking tool that comes built into Kali Linux and works against online services. It is compatible with plenty of protocols; SSH, RDP and HTTP are just a few of them. You should take a look at the full list here. There is also a GUI version of this tool, namely Hydra-GTK, but I am going to stick with the console-based one. What I am going to try in this tutorial is simple: I will prepare a wordlist containing the right password and afterwards execute a bruteforce attack with it.

Let's get back to the program. Hydra, like the previous tools, is simple yet practical. Certain parameters are more commonly used than others. To see the full list of options, you can type hydra -h.


(-R) In case of any mishap or pause, this lets you carry on from where you left off.
(-S) Enables an SSL connection. When you attack a service running over SSL, this is the way to go.
(-s) Lower case s stands for the port number of the service. SMTP over SSL usually runs on port 465, but this can vary from server to server.
(-l, -L) Lower case l is for a single username. Let's say you have three usernames and a wordlist for a certain SSH server, but you are not sure which password belongs to which user. That is when you should use -L with a file of usernames.
(-p, -P) Similarly, you have single and list options for passwords here.
(-x) You can generate passwords on the fly with this. For example, -x 4:6:A1 generates passwords of 4 to 6 characters consisting of upper case letters and numbers. For those who want to dig deeper, take a look at hydra -x -h.
(-t) The number of concurrent attempts running simultaneously.
(-w) Stands for the wait time between attempts.
(-v) Verbose mode.
(-V) It shows every login/password pair attempted.
(-f) Used when you would like to exit after the first successful login is found.


These above are the common parameters. Every SMTP server has a port number on which mail traffic runs, and email client software requires POP3 and SMTP access, so you should find out which port your target mail server uses. Here is a link where you can find a list of mail servers and their ports.




I created an account on the Yandex mail server beforehand for the sake of this tutorial. The mail account we are going to work on is pentestingtr@yandex.com and its password is password123. I added this password to the wordlist. I do not recommend a lengthy password list; otherwise it would take a very long time to crack. There are tools like crunch to prepare more specific wordlists for certain targets. One of the overarching steps in password cracking is to gather as much information as you can through social engineering. It is also a way to guess combinations like dates of birth and names, and you can do that in a more automated way with wordlist generators.
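Putting those parameters together, the attack for this scenario would look roughly like the sketch below. It assumes Yandex's SMTP server is smtp.yandex.com listening with SSL on port 465, and that the wordlist is saved as wordlist.txt in the current directory.

hydra -S -s 465 -l pentestingtr@yandex.com -P wordlist.txt -V -f smtp.yandex.com smtp

The -V flag prints each attempt as it happens, and -f stops the run as soon as the valid password is found.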





As we predicted, it eventually found the right password successfully.


Sunday, August 28, 2016

Medusa

After a hiatus, I feel like diving into cracking tools, and I thought Medusa would be a nice fit for this purpose. As most of you would know, Medusa is a cracking tool like Ncrack or Hydra that can execute bruteforce attacks remotely. It supports all sorts of protocols, such as SSH, Telnet, HTTP, HTTPS and many more. Before we go on, there are certain things we should know about how the FTP service works and what sort of limitations or restrictions hosts can apply to it. What is the maximum number of failed login attempts? Is it configured to ban clients after a certain number of failures? If yes, is the ban permanent or temporary? These factors can seriously hamper the cracking process, and based on them you eventually decide whether you cannot pull it off or just need to add some parameters to see if it works out.

For the sake of this tutorial, I plan to use it for FTP cracking. I have prepared wordlists in Kali Linux on a virtual machine and pre-installed ProFTPD 1.3.5 on my laptop as the server machine. The valid username is username and the password is password123.




Medusa is fairly straightforward and has a command-line interface. I should say it has mainly five parameters in its basic use. One of them is where you specify the host address, which -h stands for. It also allows you to supply a file containing multiple hosts (-H followed by the file path) when you need to work on more than one host, which is a pretty rare case I guess. Next, we have the username section, whose parameter is -u or -U (upper case for lists). Like aircrack, hydra and the rest, Medusa uses lower case for single, known entries: when you are sure there is a username named admin, go ahead and put -u admin. The same applies to the password options (-p or -P). Let's say you have no clue what these could be. In that case you should prepare wordlists for both usernames and passwords, either a ready-to-use wordlist you found somewhere or a specific list compiled from guesses about probable usernames. Taking shots in the dark won't help you without knowing at least one of these, so I recommend short lists that include words you think are likely valid, such as admin, root, me, ftp. Last but not least, we need to specify the execution mode, which defines the protocol to attack; -M stands for mode. Below, we have a couple of optional parameters to tweak your bruteforce process.
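For the setup described above, a minimal sketch of the command would be the following. The IP address is an assumption standing in for my laptop's address on the local network, and the wordlist filename is a placeholder.

medusa -h 192.168.1.10 -u username -P passwords.txt -M ftp

Since the username is already known here, only the password list gets bruteforced; swap -u username for -U users.txt if you need to guess both.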






An FTP server might have more than one valid account, so when Medusa finds a correct username and password it does not stop until the lists end; it normally tries all username and password pairs in order. If you want it to stop bruteforcing once it finds a valid pair, you should add the -f parameter. For bruteforcing login pages over the HTTPS protocol, it has the -s parameter to enable SSL mode. -v stands for verbosity, which can be increased up to level six. It also offers the convenience of resuming a previous scan with the -Z parameter. There is also the practical -e parameter for extra checks: it can try an empty password (-e n), the username itself as the password (-e s), or both (-e ns). The FTP server might also be carefully configured against bruteforce attacks, so you get blocked while attempting; you can fine-tune things like the sleep duration between retry attempts or the maximum number of retries before giving up, and so forth. For further information, you should check out the other parameters yourself and test them against your own target in a virtual machine.
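As a sketch of those tweaks combined (same placeholder host and filenames as before):

medusa -h 192.168.1.10 -U users.txt -P passwords.txt -M ftp -f -e ns -v 6

This additionally tries the empty password and the username as the password for every account, prints everything it does, and stops at the first valid pair.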

Saturday, July 2, 2016

Dnsenum

This tutorial covers one of the most well-known DNS enumerators, named Dnsenum. DNS enumeration is conducted in the active information gathering phase to obtain as much detailed information as possible about the target system. Before getting into the DNS enumeration process, it is highly important to know the basics of how DNS servers work; I highly recommend having enough knowledge of things like zone transfers and DNS record types to better understand the concept. It is safe to say that obtaining crucial information about a target can sometimes be pretty easy when a server is misconfigured, so that you can easily find out its reverse DNS records, name records, the mail server it uses, even the operating system installed on the target. Anything found with this technique might have a critical vulnerability to exploit and help you get into the system. There are a bunch of practical tools that help you gather information about DNS records; DNSrecon, fierce, dnstracer and dig are a few of these. This tutorial is particularly about Dnsenum.
To use the tool, execute it in the console by typing dnsenum. When it comes up, you'll see its parameters. You can see the most used ones down below; each one has a description in the help menu, so you can try out the other ones yourself.
--enum It is a shortcut that combines three different options: the thread count, the maximum subdomain number and the whois query. The thread value is set to 5, the second option is the number of subdomains scraped from Google, which is set to 10, and last but not least it enables the whois query that you would otherwise turn on with the -w parameter.
--noreverse As the help menu says, it skips the reverse lookups.
-f This is for bruteforcing subdomains. It takes the path of a wordlist file containing subdomain names. You can either use the built-in wordlist at /usr/share/dnsenum/dns.txt or find a better one on the Internet.
-o It outputs the results in XML format. I think this function is sort of buggy in this version; it doesn't give properly formatted reports. When I tried converting one into PDF format, the online converter gave me a 312-page PDF file, and most of the pages were blank.
-w For performing whois queries on the class C network.






--dnsserver With this parameter you can indicate which DNS server you would like to use for your enumeration. It works with Google unless you specify a DNS server.
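Tying the options above together, a typical run might look like the sketch below; example.com and the output filename are placeholders.

dnsenum --enum --noreverse -f /usr/share/dnsenum/dns.txt -o report.xml example.com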


This time I ran it against a website (zonetransfer.me) that is purposely misconfigured to allow zone transfers. It gives us all the records of this server, grouped by record type.
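No extra options are needed for that; the plain invocation is enough to pull the zone:

dnsenum zonetransfer.me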


You had better compare the results you get from dnsenum with the ones obtained with another tool to make sure you do not miss anything important.

Wednesday, June 8, 2016

DirBuster

In this tutorial, I am going to explain how to find subdirectories of a target website that are not publicly listed. It is not that hard to find administrator login pages and other directories you are not supposed to access.

DirBuster is a well-known scanner that comes in handy whenever you need to know whether a certain page on a website exists or not. Technically, every HTTP exchange between the client side and the server side ends with an HTTP status code: the client sends a GET request, the server receives it and answers. When the requested page is available, we do not normally see an error in the browser; code 200 means that everything went well between client and server without any mishap. Another example is one we all know, the 404 (Not Found) error, which indicates that no page with the requested name exists. You can find further information about these codes on Wikipedia.
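You can observe these codes by hand. For instance, a HEAD request with curl prints the status line the server returns (the /admin/ path here is just an illustrative guess):

curl -I http://example.com/admin/

A 200 in the response would mean the directory exists, a 404 means it does not, and a 301 or 302 usually points to where it moved.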

Let's get back to DirBuster. This tool was written in Java by James Fisher and unfortunately is not being developed anymore. It has an easy-to-use GUI. As for how DirBuster works, it asks the website whether it has a file or directory with a given name; the server receives the request and responds to the client with an HTTP code, and DirBuster determines, based on that code, whether such a directory or file exists. It steadily tries every word you predefine in a wordlist, one by one. The bigger your wordlist is, the more accurate your results are. You can either use one of the built-in wordlists in DirBuster's directory (/usr/share/dirbuster/wordlists) or find a more extensive one on the Internet. By the way, a full run can be pretty time-consuming depending on the size of the wordlist, especially considering you run DirBuster in recursive mode.




DirBuster welcomes you with a main interface that is divided into four sections. At the top we have the URL section where we enter the target site; it can crawl over both HTTP (port 80) and HTTPS (port 443). The second section is where we customize the work method and thread settings. The Auto switch is on by default, and I think that should be fine in most cases; you can find more information about the difference between the GET and HEAD methods in the link here. It also lets you adjust the thread count, which you can increase all the way up to 500, though this might put a bit of a workload on the server. The third section is about wordlists, and we have two options here. The Pure Brute Force method generates candidate names character by character instead of reading them from a list; I didn't personally try it out, and it doesn't seem efficient to me either. The second scanning type works through wordlists; you should have a look at the available ones by clicking the List Info button. Finally, in the last part you see some options ticked by default. When you need to scan both directories and files at the same time, you should pick both of them. The 'Be recursive' option, as its name says, makes the scan run recursively for every directory, so it scans every subdirectory one by one. There is also a convenient way to dig directly into a particular subdirectory without wasting any time: just stop the scan, get back to the main menu, put the name of the subdirectory you want to dig into where it says 'Dir to start with', then start the scan again.





I scanned a website that I randomly picked for this tutorial, set the thread count to 71 and picked the medium-size wordlist. As you can see, it found some subdirectories and pages in the main directory. I then focused solely on the directory named '2005', and it shows every file and page it found, with the response code for each one, in a table. Meanwhile, it is also possible to run your scans through a proxy and to use your credentials for a specific target website; you can find these two options in the Advanced Options window.





That is pretty much how it maps a whole website. It can be helpful for finding admin login pages and FTP directories as well. In short, DirBuster is very practical and gives us a clear idea of what kind of hidden pages and directories our target has.

Thursday, May 26, 2016

Recon-ng 2

The previous post was mainly about Recon-ng: we went over how it functions, input types, how inputs are related and so forth. This time, our focus will be entirely on modules. Although a few of the many modules require API keys, there are still tons you can run without one. My recommendation is to spend some time with every single one of them; it would be too time-consuming to cover all of them here, so we will just go over a few. Let's get started.

I think the very first module we should start off with is the Hostname Resolver. You can find it under 'recon/hosts-hosts/resolve'. How it works is quite simple: it resolves domain names to IP addresses. For every module, right before you run it, you can read through the info section and check the input type by typing show inputs. Whenever you want to target a specific input, which is a host in this case, type set SOURCE and add the host name right there. For this demonstration, I am going to run this module against yahoo.com. You can see two newly found hosts below. It automatically outputs the results at the end and saves all of them for later use; the show hosts command lists every host you have found, including the latest ones. You don't necessarily have to keep every single piece of information, and it is easy to remove some of it: just type 'delete', the table name and the row id. For example, delete hosts 2.
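A minimal sketch of the whole exchange inside the recon-ng console:

use recon/hosts-hosts/resolve
show inputs
set SOURCE yahoo.com
run
show hosts
delete hosts 2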








The Reverse Resolver is another module; it looks up each IP address to resolve its hostname. This time, the input type is IP addresses and the output type is domain names, so what it does is basically the exact opposite of the Hostname Resolver. You can observe that it updates the hosts table, adding some hostnames, after it is done.





The next module is the Bing API Hostname Enumerator. As you would guess, it requires an API key to run. This module collects all subdomains of a target address. I kept yahoo.com as the example and ran the module; now we have 135 new subdomains. As the module itself says, the whole process is conducted through bing.com.




A domain name has three parts: the subdomain, which we just learned how to get, the second-level domain and the top-level domain.




One of the practical tools you will find useful is the DNS Public Suffix Brute Forcer. This module bruteforces a list of TLD and SLD names on a domain to find the ones up and running. I set my source to xml.com for this one. As far as I remember, it took a while to fully complete; there must be a lot of TLD and SLD extensions in the list. Again, after it is done you can list all of them by typing show domains.






The next module is the Namecheck.com Username Validator. Although you can collect this type of information by simply going over to bing.com, namecheck.com, etc., it would be pretty difficult to sift through that amount of data by hand. How it functions is that it simply asks plenty of social networking websites whether a given username exists or not. I supposed the nickname 'Robinn' would exist on most websites, so I put 'Robinn' as the source and fired up the module. You can see it found sixty-three profiles, with URL addresses, in 30 seconds. In this case, the output type is profiles, which can be seen in the profiles table; type show profiles and scroll through the list to see all of them.






Another tool you can make good use of is the Whois Data Miner. By the way, most of the modules in Recon-ng are written by Tim Tomes in Python, and he created a platform for this framework to make it better, with reported issues and discussions you can be part of; the link is here. Let's get back to where we were. The Whois Data Miner helps you find locations and netblocks, which makes it easier to figure out the IP range of a host. When you don't know what type of input a module needs, take a look at its description: just type show info and read the input section. Apparently, what we need here is a company name. I am going to go with 'BMW' to see what comes up. You can also put in random company names to try it out; by the way, I did not find any information about Suzuki.






In the last example I prepared, we are going to convert an actual address to geo-coordinate format and then use that info in the Flickr pushpin module to obtain images shared in a particular location. For this example, I picked Sabanci University, located in Turkey. The very first thing you need to do is acquire the address; you can take advantage of Google or the official website of the place you are working on. First off, we are going to use the Address Geocoder, which you can find under the recon/locations-locations directory. Set the source to that address and run the module; there should be latitude and longitude values in the results. The next step is to load the Flickr module: you can simply search for its name or just type recon/locations-pushpins/flickr. I kept the radius value as it was. You should run show inputs to make sure your inputs are OK before you run the module.
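A sketch of the two steps inside the console; the module path for the geocoder is an assumption about how this version names it, and the address string is abbreviated:

use recon/locations-locations/geocode
set SOURCE Sabanci University, Tuzla, Istanbul
run
use recon/locations-pushpins/flickr
show inputs
run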









One of the great features Recon-ng has is that you can get fine reports of what you gather. Lastly, let's prepare a report to see the images we got. There are seven report types in Recon-ng, and I think the report functions can be treated as modules: you load and run them just as you do with other modules. After you set the latitude, longitude and radius values, run it, and you eventually have a nice-looking HTML-formatted report.



Friday, May 6, 2016

Recon-ng

In this chapter, I am going to go over one of the most useful and powerful reconnaissance tools, named recon-ng. In earlier posts, I covered the active and passive information gathering stages and how to gather information through publicly available online services, including Nmap usage. Wouldn't it be nice if we could do most of that inside one tool, without going off to websites for DNS queries and the other things we need?

First off, recon-ng is a reconnaissance tool which collects data from online resources like Facebook, Twitter and Shodan. It has a CLI that somewhat resembles SEToolkit, and how it works is quite similar to Metasploit, though it is technically not an exploitation framework. Some of the sections in recon-ng, which we call 'modules', require an API key to use. For those who don't know, an API key is a sort of credential that lets your application talk to a remote service, so that the service becomes directly integrated into your application. It could be a LinkedIn query, something that lets you post an entry, or a search for a specific domain name on websites like Shodan. You can acquire your own API keys from these websites by filling out a form describing what the application you are working on is about; you don't necessarily have to explain its functionality and how it works in detail.

Besides the modules requiring API keys, there are several free modules that come in handy as well. The ones I like the most are the pushpin modules: with these, you can find images or videos shared on Flickr, Instagram, YouTube, etc. inside a predefined radius.

Let's start off with installation. Recon-ng comes pre-installed in Kali Linux by default, and it is possible to run the framework's Python code on other Linux distros; for more detail, check it out at https://bitbucket.org/LaNMaSteR53/recon-ng.git. By the way, this framework is coded by Tim Tomes, a.k.a. LaNMaSteR53, in Python, and I guess he is going to keep adding new modules to it in the future. To run the framework, type 'recon-ng' and hit Enter. You'll get a screen like this.




As you can see, we have a CLI whose welcome screen kind of resembles SEToolkit's. Do not try typing any of the numbers you see there; they do not work. I highly recommend taking a look at the help menu, which I think can guide you: type 'help' and hit Enter.

Actually, the first thing you need to know is that things can get messy dealing with hosts, domain names, contacts and so on while gathering intelligence. To keep things tidy, there is a workspace concept: you can create, delete and list workspaces. By default, you should see [recon-ng][default] in your command line. When you type 'workspaces' and hit Enter, you see the commands available for workspaces. You can create a new one with the add option, like 'workspaces add newworkspace', and as you'd probably guess, you jump into a workspace by selecting it, like 'workspaces select newworkspace'.
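In short, the basic workspace housekeeping looks like this sketch:

workspaces list
workspaces add newworkspace
workspaces select newworkspace
workspaces delete newworkspace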




There are several kinds of inputs we can add, delete or change, like domains, company names, contacts, profiles, etc. These normally vary based on the workspace you work in, and you can see all of them by typing 'show schema'. Most of the inputs are directly connected to each other.




I previously mentioned API keys and how to use them. Let's first take a look at what kinds of API keys can be used with recon-ng: type 'keys list' and hit Enter, and you'll see the whole API list in a table of Name and Value. You can also see all the commands for managing API keys; the usage section has list, add and delete options. Then you can play with them until you get the hang of how it works.
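As a sketch (the key value itself is a placeholder you obtain from the provider):

keys list
keys add bing_api <your key value>
keys delete bing_api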

We just learned how to add workspaces, API keys and other types of inputs like domains, hosts and contacts. The last thing we need is to find the proper module. Just like in Metasploit, you can type 'show modules', or 'search ...' to list all the modules related to a specific name; that could be google, resolve, jigsaw and so forth. We then put the word 'use' followed by the 'directory' of the module we want, for example use recon/hosts-hosts/resolve. Each module has its own information as to how it functions and what sort of input it needs. After loading your module, type show options to find out the type of input. Every input or source can be altered as you prefer: type 'set', the option name, then the value, which could be an email address, a domain name, etc. If it is a module that requires a specific input, you should definitely take a look at it by typing show inputs. For instance, pushpin modules work with a location, which means you will have to specify the longitude and latitude. By the way, we have modules named geocode and reverse_geocode with which we can convert addresses to longitude and latitude and vice versa. Once your inputs are valid and your API key is specified (which is not required for every module), you can type 'run' and hit Enter. Recon-ng saves all the information at the end of the process, and the results you get can be the input for your next task.
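Putting the whole flow together, a typical session is roughly this sketch (example.com stands in for your target):

search resolve
use recon/hosts-hosts/resolve
show options
set SOURCE example.com
run
show hosts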




One of the amazing features of Recon-ng is the ability to output the harvested information as reports formatted in CSV, HTML, XLSX, XML, etc. You can go ahead and load the one you want; consider them as being something like modules: load them, make some alterations, and the report is ready. There are a few variables like creator and filename which you can set as you'd like, and your report file eventually lands where you specified.
In the next post, we will dig deeper into a few of the modules and how to use them.

Saturday, April 23, 2016

Using Nmap

In this section, I will try to cover Nmap, one of the most powerful and useful tools you can use in the active information gathering phase. Nmap is a successful program that can be installed on Windows, Mac OS X and Linux and gives quite reliable results in the footprinting and banner grabbing stages of information gathering. Nmap runs in the terminal, and the conveniences it provides let you make your scans quite varied and detailed.
Nmap sends very specific packets to the target host, then compiles the responses it gets back and, after evaluating them internally, tells us which ports are open, closed or filtered, which operating system is installed, and the types and versions of the installed services, with an accuracy we can largely rely on.

Nmap is continuously being developed and, as I mentioned before, it is a free application. Technology evolves very fast: service versions change constantly and even new operating systems come into use. For newly released operating systems and services to be recognized by Nmap, and thus to avoid unidentified results in your network scans, the newly encountered fingerprints need to be added to Nmap's database. This is not something that can be done on the client side; we get these through updates. However, in some situations you can support Nmap's development yourself. When you run a test (on a local network or over the Internet) against a target whose operating system and services you already know, and you find that Nmap fails to recognize these properties (the operating system type and the service), you can share those results on Nmap's official website. The link is here.

First, it is worth noting that none of the scans you perform will be particularly welcomed by the target host; it is like frisking someone from head to toe. The general philosophy of port scanning rests on sending probes to a set of predetermined, commonly used ports and interpreting the results. With a few tweaks, however, you can scan all 65535 ports, or the top 100, 200 or so most used ones; scanning a specific port range is also among your options. For various reasons, administrators may install services on high-numbered ports (not on nmap's top list) or use various firewalls, and in those cases the results can be misleading. Still, with small adjustments it is possible to make the results a bit more solid. To get a general idea of which service might be running on which port, you can have a look at this list.

Nmap comes pre-installed with the Kali distribution, and it is possible to install it on other Linux distros as well; running apt-get install nmap is enough. Afterwards, you can type nmap --version in the console to see which version you installed.

Nmap is a very comprehensive program and lets you perform specific scans that take even the finest details into account. There are descriptions of all the commands in the help section, but I will try to focus on the important ones.

To practice the commands you learn, you can work on a device on your own network or use scanme.nmap.org. That address is provided precisely so that new users can practice.

When you scan with nmap scanme.nmap.org, without adding any options, nmap scans the ports on its top 1000 list. You can run the same scan as nmap 45.33.32.156 as well. The scan can sometimes take a long time; in such cases you can press Enter to get information about how it is going. As you can see, it reported how many filtered ports it found and which service is installed on which port. By adding the -v parameter, you can have it tell you at regular intervals what percentage of the process is complete; if you did not add this parameter, you can activate it by pressing v while the process is running. Verbosity has four levels (-v, -v2, -v3, -v4).




nmap -sV scanme.nmap.org. Here we use the -sV option to try to learn the service information.




The -sT parameter scans using the full three-way handshake; it is known as the TCP connect scan.
-sS is known as the SYN stealth scan. There is no handshake: it only sends a SYN (synchronize) packet and reads the answer. It is built on the logic that if a SYN/ACK packet comes back, the port is open.
You may want to look at the reports of your scans later, or even convert them to HTML format. For that, you should use the -oA option; the saved reports will be created in the directory you are currently in in the terminal. oA means output all, i.e. it produces each of the three output types: .nmap, .gnmap and .xml. (For the XML format only, use -oX; for gnmap, -oG; for normal nmap output, -oN.) Converting the XML format to HTML is also possible; you can do it with your preferred converter or use xsltproc.
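As a sketch (the base filename is a placeholder), saving all three formats and then converting the XML output to HTML would look like this:

nmap -sV scanme.nmap.org -oA scanreport
xsltproc scanreport.xml -o scanreport.html

The second command works because nmap's XML output references its own stylesheet, so xsltproc can render it directly.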




We said it is possible to scan specific ports or port ranges. A few examples would be as follows.

nmap scanme.nmap.org -p 443 (only for port 443)
nmap scanme.nmap.org -p 1-2000 (for all ports from 1 to 2000)
nmap scanme.nmap.org -p- (applies the scan to every one of the 65535 ports)

To handle more than one target at once, just write the IP addresses separated by spaces. In group scans, it is also possible to specify targets you want excluded, one by one; the --exclude option is used for this.

nmap 98.138.253.109-162 -T5 --exclude 98.138.253.157-161. Here it will report results for all the hosts it finds in the IP range 109 to 162, but those ending in 157 through 161 are left out of the list. You can do the same for ports as well (with --exclude-ports).




In some cases, even though there are services running on the target system, you may get results suggesting the server is offline. In that situation you can resort to the method called an agnostic scan: adding the -Pn parameter, which skips host discovery, can produce better results. Timing adjustments can be made at the same time.

If you want to learn about the target's operating system, you should add the -O parameter. However, the results may not be one hundred percent accurate; there can be a small margin of error.

I wrote a moment ago that there are a number of timing parameters; let's talk about them now. We have a six-level speed setting: T0, T1, T2, T3, T4 and T5. The slowest one is T0, and as the number increases the scan completes faster; a slower scan, however, can give more reliable results. A few variables differ between the levels, such as the min_rtt_timeout and max_rtt_timeout durations. For scans against IPS systems, you may need to change the timing parameter you choose. You can even take this further by spoofing your IP address to give the impression that the scan is being performed by several different people at once; this technique is known as a decoy. The -D parameter is appended to the prepared scan command. For example;
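A minimal sketch of a decoy scan; the two decoy addresses are made-up placeholders, and ME marks where your real address sits among them:

nmap -sS scanme.nmap.org -D 10.0.0.5,10.0.0.7,ME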



Our previous topic was proxychains, and we talked at length about how it lets us operate over the Internet through proxies. It is possible to integrate proxychains into your Nmap scans: activate it by running the service tor start command, then simply prepend proxychains to the command you are going to type for your scan, like proxychains nmap scanme.nmap.org.


Nmap can be made even richer with various script extensions. There are many scripts available for things like HTTP enumeration, traceroute, or for use against SNMP systems. To go into more detail and discover what kinds of scripts exist, you can have a look at the official website.
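As a small taste, running a single NSE script is just a matter of naming it; http-enum here probes a web server for common directories:

nmap --script http-enum scanme.nmap.org -p 80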