This blog is all about Cyber Security and IT

Saturday, January 26, 2019

Working with Databases in Metasploit



When you’re running a complex penetration test with a lot of targets, keeping
track of everything can be a challenge. Luckily, Metasploit has you covered
with expansive support for multiple database systems.
To ensure that database support is available for your system, you should
first decide which database system you want to run. Metasploit supports
MySQL and PostgreSQL; because PostgreSQL is the default, we’ll stick with
it in this discussion.
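On a Kali-style install, getting the database up and confirming the connection typically looks something like this (a minimal sketch; the exact commands can vary with your setup):

systemctl start postgresql
msfdb init
msfconsole

msf > db_status

If everything is wired up, db_status should report that PostgreSQL is connected.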













To start, export the nmap scan result >>





Use the -oX option (meaning output in XML format)









This will create an XML file named ResultNmap.XML.
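To pull those results into Metasploit, the rough end-to-end flow looks like this (the target range below is only a placeholder; db_import reads the XML into whichever workspace is active):

nmap -oX ResultNmap.XML 192.168.1.0/24

msf > db_import ResultNmap.XML
msf > hosts
msf > services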


Nmap with -sS and -Pn



nmap has quite a few options, but you'll use just a few of them for the most part.
One of our preferred nmap options is -sS. This runs a stealth TCP scan
that determines whether a specific TCP-based port is open. Another preferred option is -Pn, which tells nmap not to use ping to determine whether a system is running; instead, it considers all hosts "alive." If you're performing Internet-based penetration tests, you should use this flag, because most networks don't allow Internet Control Message Protocol (ICMP), which is the protocol that ping uses. If you're performing this scan internally, you can probably ignore this flag.
Now let’s run a quick nmap scan against our target machine using
both the -sS and -Pn flags.
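The command itself is short; the target address below just stands in for whatever machine you are scanning in your own lab:

nmap -sS -Pn 192.168.1.155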














As you can see, nmap reports a list of open ports, along with a description
of the associated service for each.
For more detail, try using the -A flag. This option will attempt advanced
service enumeration and banner grabbing, which may give you even more
details about the target system. For example, here’s what we’d see if we were
to call nmap with the -sS and -A flags, using our same target system:
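Using the same placeholder target address as above, the call would be:

nmap -sS -A 192.168.1.155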










Scanner FTP Auxiliary Modules



anonymous





The ftp/anonymous scanner will scan a range of IP addresses searching for FTP servers that allow anonymous access, and determines whether read or write permissions are allowed.





msf > use auxiliary/scanner/ftp/anonymous
msf auxiliary(anonymous) > show options

Module options:

Name     Current Setting      Required  Description
----     ---------------      --------  -----------
FTPPASS  mozilla@example.com  no        The password for the specified username
FTPUSER  anonymous            no        The username to authenticate as
RHOSTS                        yes       The target address range or CIDR identifier
RPORT    21                   yes       The target port
THREADS  1                    yes       The number of concurrent threads




Configuring the module is a simple matter of setting the IP range we wish to scan along with the number of concurrent threads, and letting it run.





msf auxiliary(anonymous) > set RHOSTS 192.168.1.200-254
RHOSTS => 192.168.1.200-254
msf auxiliary(anonymous) > set THREADS 55
THREADS => 55
msf auxiliary(anonymous) > run

[*] 192.168.1.222:21 Anonymous READ (220 mailman FTP server (Version wu-2.6.2-5) ready.)
[*] 192.168.1.205:21 Anonymous READ (220 oracle2 Microsoft FTP Service (Version 5.0).)
[*] 192.168.1.215:21 Anonymous READ (220 (vsFTPd 1.1.3))
[*] 192.168.1.203:21 Anonymous READ/WRITE (220 Microsoft FTP Service)
[*] 192.168.1.227:21 Anonymous READ (220 srv2 Microsoft FTP Service (Version 5.0).)
[*] 192.168.1.204:21 Anonymous READ/WRITE (220 Microsoft FTP Service)
[*] Scanned 27 of 55 hosts (049% complete)
[*] Scanned 51 of 55 hosts (092% complete)
[*] Scanned 52 of 55 hosts (094% complete)
[*] Scanned 53 of 55 hosts (096% complete)
[*] Scanned 54 of 55 hosts (098% complete)
[*] Scanned 55 of 55 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(anonymous) >




ftp_login





The ftp_login auxiliary module will scan a range of IP addresses attempting to log in to FTP servers.





msf > use auxiliary/scanner/ftp/ftp_login 
msf auxiliary(ftp_login) > show options

Module options (auxiliary/scanner/ftp/ftp_login):

Name              Current Setting                     Required  Description
----              ---------------                     --------  -----------
BLANK_PASSWORDS   false                               no        Try blank passwords for all users
BRUTEFORCE_SPEED  5                                   yes       How fast to bruteforce, from 0 to 5
DB_ALL_CREDS      false                               no        Try each user/password couple stored in the current database
DB_ALL_PASS       false                               no        Add all passwords in the current database to the list
DB_ALL_USERS      false                               no        Add all users in the current database to the list
PASSWORD                                              no        A specific password to authenticate with
PASS_FILE         /usr/share/wordlists/fasttrack.txt  no        File containing passwords, one per line
Proxies                                               no        A proxy chain of format type:host:port[,type:host:port][...]
RECORD_GUEST      false                               no        Record anonymous/guest logins to the database
RHOSTS                                                yes       The target address range or CIDR identifier
RPORT             21                                  yes       The target port (TCP)
STOP_ON_SUCCESS   false                               yes       Stop guessing when a credential works for a host
THREADS           1                                   yes       The number of concurrent threads
USERNAME                                              no        A specific username to authenticate as
USERPASS_FILE                                         no        File containing users and passwords separated by space, one pair per line
USER_AS_PASS      false                               no        Try the username as the password for all users
USER_FILE                                             no        File containing usernames, one per line
VERBOSE           true                                yes       Whether to print output for all attempts




This module can take both wordlists and user-specified credentials in order to attempt to log in.





msf auxiliary(ftp_login) > set RHOSTS 192.168.69.50-254
RHOSTS => 192.168.69.50-254
msf auxiliary(ftp_login) > set THREADS 205
THREADS => 205
msf auxiliary(ftp_login) > set USERNAME msfadmin
USERNAME => msfadmin
msf auxiliary(ftp_login) > set PASSWORD msfadmin
PASSWORD => msfadmin
msf auxiliary(ftp_login) > set VERBOSE false
VERBOSE => false
msf auxiliary(ftp_login) > run

[*] 192.168.69.51:21 - Starting FTP login sweep
[*] 192.168.69.50:21 - Starting FTP login sweep
[*] 192.168.69.52:21 - Starting FTP login sweep
...snip...
[*] Scanned 082 of 205 hosts (040% complete)
[*] 192.168.69.135:21 - FTP Banner: '220 ProFTPD 1.3.1 Server (Debian) [::ffff:192.168.69.135]\x0d\x0a'
[*] Scanned 204 of 205 hosts (099% complete)
[+] 192.168.69.135:21 - Successful FTP login for 'msfadmin':'msfadmin'
[*] 192.168.69.135:21 - User 'msfadmin' has READ/WRITE access
[*] Scanned 205 of 205 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(ftp_login) >




As we can see, the scanner successfully logged in to one of our targets with the provided credentials.





ftp_version





The ftp_version module simply scans a range of IP addresses and determines the version of any FTP servers that are running.





msf > use auxiliary/scanner/ftp/ftp_version
msf auxiliary(ftp_version) > show options

Module options:

Name     Current Setting      Required  Description
----     ---------------      --------  -----------
FTPPASS  mozilla@example.com  no        The password for the specified username
FTPUSER  anonymous            no        The username to authenticate as
RHOSTS                        yes       The target address range or CIDR identifier
RPORT    21                   yes       The target port
THREADS  1                    yes       The number of concurrent threads




To set up the module, we just set our RHOSTS and THREADS values and let it run.





msf auxiliary(ftp_version) > set RHOSTS 192.168.1.200-254
RHOSTS => 192.168.1.200-254
msf auxiliary(ftp_version) > set THREADS 55
THREADS => 55
msf auxiliary(ftp_version) > run

[*] 192.168.1.205:21 FTP Banner: '220 oracle2 Microsoft FTP Service (Version 5.0).\x0d\x0a'
[*] 192.168.1.204:21 FTP Banner: '220 Microsoft FTP Service\x0d\x0a'
[*] 192.168.1.203:21 FTP Banner: '220 Microsoft FTP Service\x0d\x0a'
[*] 192.168.1.206:21 FTP Banner: '220 oracle2 Microsoft FTP Service (Version 5.0).\x0d\x0a'
[*] 192.168.1.216:21 FTP Banner: '220 (vsFTPd 2.0.1)\x0d\x0a'
[*] 192.168.1.211:21 FTP Banner: '220 (vsFTPd 2.0.5)\x0d\x0a'
[*] 192.168.1.215:21 FTP Banner: '220 (vsFTPd 1.1.3)\x0d\x0a'
[*] 192.168.1.222:21 FTP Banner: '220 mailman FTP server (Version wu-2.6.2-5) ready.\x0d\x0a'
[*] 192.168.1.227:21 FTP Banner: '220 srv2 Microsoft FTP Service (Version 5.0).\x0d\x0a'
[*] 192.168.1.249:21 FTP Banner: '220 ProFTPD 1.3.3a Server (Debian) [::ffff:192.168.1.249]\x0d\x0a'
[*] Scanned 28 of 55 hosts (050% complete)
[*] 192.168.1.217:21 FTP Banner: '220 ftp3 FTP server (Version wu-2.6.0(1) Mon Feb 28 10:30:36 EST 2000) ready.\x0d\x0a'
[*] Scanned 51 of 55 hosts (092% complete)
[*] Scanned 52 of 55 hosts (094% complete)
[*] Scanned 53 of 55 hosts (096% complete)
[*] Scanned 55 of 55 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(ftp_version) >

Friday, January 25, 2019

Malvertising


Malvertising, or malicious advertising, is the use of online, malicious advertisements to spread malware and compromise systems. Generally this occurs through the injection of unwanted or malicious code into ads. Malicious actors then pay legitimate online advertising networks to display the infected ads on various websites, exposing every user visiting these sites to the potential risk of infection. Generally, the legitimate advertising networks and websites are not aware they are serving malicious content.

How does malvertising work?


Malicious actors hide a small piece of code deep within a legitimate-looking advertisement, which will direct the user’s machine to a malicious or compromised server. When the user’s machine successfully makes a connection to the server, an exploit kit hosted on that server executes. An exploit kit is a type of malware that evaluates a system, determines what vulnerabilities exist on the system, and exploits a vulnerability. From there, the malicious actor is able to install malware by utilizing the security bypass created by the exploit kit. The additional software could allow the attacker to perform a number of actions, including allowing full access to the computer, exfiltrating financial or sensitive information, locking the system and holding it ransom via ransomware, or adding the system to a botnet so it can be used to perform additional attacks. This entire process occurs behind the scenes, out of sight of the user and without any interaction from the user.

The Most Popular Exploit Kit


One of the most popular exploit kits currently in use is the Angler Exploit Kit. Angler employs a number of evasion techniques in order to avoid being detected. For example, the URL of the landing page the user’s computer connects to, where the exploit kit is hosted, is often generated dynamically. This makes it difficult to detect because the URL is constantly changing. Angler also has the functionality to determine if it is being run inside of a virtual machine, thus making it difficult for cybersecurity analysts to perform analysis on it. Finally, multiple layers of obfuscation exist in Angler, built on top of each other with various encoding schemes (base64, RC4, etc.) to hide the code that executes when the vulnerable user visits the server.

Angler uses a variety of vulnerabilities in Adobe Flash, Microsoft Silverlight, and Oracle Java. These are all extremely common extensions running on many popular web browsers. When the user’s computer visits the server hosting the exploit kit, the system is scanned to determine which versions of the above software are running on the user’s browser. From there, Angler picks the best vulnerability for exploiting the victim.

Friday, November 30, 2018

Types of Windows Events


We have five types of events in the Windows event log >

Error: Generated when a service fails to execute or there is some loss of information.

Warning: Generated when a problem may occur in the future, such as a low disk space message.

Information: Generated for informative messages, for example when an application or service is running correctly.

Success audit: Generated when a user successfully logs in to a system.

Failure audit: Generated when a login attempt fails.

Main Security Events

Event      ID     Level           Event Log    Event Source
App Error  1000   Error           Application  Application Error
App Hang   1002   Error           Application  Application Hang
BSOD       1001   Error           System       Microsoft-Windows-WER-SystemErrorReporting
WER        1001   Informational   Application  Windows Error Reporting
EMET       1, 2   Warning, Error  Application  EMET

Hackers need access to your systems just like any other user, so it’s worth looking for suspicious login activity. Table 2 shows events that might indicate a problem. Pass-the-Hash (PtH) is a popular form of attack that allows a hacker to gain access to an account without needing to know the password. Look out for NTLM Logon Type 3 event IDs 4624 (success) and 4625 (failure).

Table 2 – Account Usage

Event                                    ID                Level          Event Log  Event Source
Account Lockouts                         4740              Informational  Security   Microsoft-Windows-Security-Auditing
User Added to Privileged Group           4728, 4732, 4756  Informational  Security   Microsoft-Windows-Security-Auditing
Security-Enabled Group Modification      4735              Informational  Security   Microsoft-Windows-Security-Auditing
Successful User Account Login            4624              Informational  Security   Microsoft-Windows-Security-Auditing
Failed User Account Login                4625              Informational  Security   Microsoft-Windows-Security-Auditing
Account Login with Explicit Credentials  4648              Informational  Security   Microsoft-Windows-Security-Auditing

High-value assets, like domain controllers, shouldn’t be managed using Remote Desktop. Logon Type 10 event IDs 4624 (Logon) and 4634 (Logoff) might point towards malicious RDP activity.
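As a quick, illustrative way to eyeball these events on a single host (a SIEM is the better place to hunt at scale), the built-in wevtutil tool can query the Security log; the event ID and result count below are just examples:

wevtutil qe Security /q:"*[System[(EventID=4624)]]" /c:5 /rd:true /f:text

The same query with EventID=4625 lists the most recent failed logons.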

Thursday, November 29, 2018

What is Syslog?




Syslog stands for System Logging Protocol and is a standard protocol used to send system log or event messages to a specific server, called a syslog server. It is primarily used to collect various device logs from several different machines in a central location for monitoring and review.










The protocol is enabled on most network equipment such as routers, switches, firewalls, and even some printers and scanners. In addition, syslog is available on Unix and Linux based systems and many web servers including Apache. Syslog is not installed by default on Windows systems, which use their own Windows Event Log. These events can be forwarded via third-party utilities or other configurations using the syslog protocol.

Syslog is defined in RFC 5424, The Syslog Protocol, which obsoleted the previous RFC 3164.



Syslog Components





On any given device, various events are generated by the system in response to changing conditions. These events are typically logged locally, where they can be reviewed and analyzed by an administrator. However, monitoring numerous logs across an equally large number of routers, switches, and systems would be time consuming and impractical. Syslog helps solve this issue by forwarding those events to a centralized server.



Syslog Transmission





Traditionally, Syslog uses the UDP protocol on port 514 but can be configured to use any port. In addition, some devices will use TCP 1468 to send syslog data to get confirmed message delivery.

Syslog packet transmission is asynchronous. What causes a syslog message to be generated is configured within the router, switch, or server itself. Unlike other monitoring protocols, such as SNMP, there is no mechanism to poll the syslog data. In some implementations, SNMP may be used to set or modify syslog parameters remotely.


The syslog message consists of three parts: PRI (a calculated priority value), HEADER (with identifying information), and MSG (the message itself).

The PRI data sent via the syslog protocol comes from two numeric values that help categorize the message. The first is the Facility value. This value is one of 16 predefined values (0 to 15) or a locally defined value (16 to 23). These values categorize the type of message or which system generated the event.

Number    Facility Description
0         Kernel messages
1         User-level messages
2         Mail System
3         System Daemons
4         Security/Authorization Messages
5         Messages generated by syslogd
6         Line Printer Subsystem
7         Network News Subsystem
8         UUCP Subsystem
9         Clock Daemon
10        Security/Authorization Messages
11        FTP Daemon
12        NTP Subsystem
13        Log Audit
14        Log Alert
15        Clock Daemon
16 - 23   Local Use 0 - 7



The second label of a syslog message categorizes the importance or severity of the message in a numerical code from 0 to 7.

Code   Severity        Description
0      Emergency       System is unusable
1      Alert           Action must be taken immediately
2      Critical        Critical conditions
3      Error           Error conditions
4      Warning         Warning conditions
5      Notice          Normal but significant condition
6      Informational   Informational messages
7      Debug           Debug-level messages




The values of both labels do not have hard definitions. Thus, it is up to the system or application to determine how to log an event (for example, as a warning, notice, or something else) and on which facility. Within the same application or service, lower numbers should correspond to more severe issues relative to the specific process.

The two values are combined to produce a Priority Value sent with the message. The Priority Value is calculated by multiplying the Facility value by eight and then adding the Severity Value to the result. The lower the PRI, the higher the priority.
(Facility Value * 8) + Severity Value = PRI
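For example, the <165> that opens the sample message shown later in this post comes from Facility 20 (local use 4) and Severity 5 (Notice): (20 * 8) + 5 = 165.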


In this way, a kernel message receives lower value (higher priority) than a log alert, regardless of the severity of the log alert. Additional identifiers in the packet include the hostname, IP address, process ID, app name, and timestamp of the message.
The actual verbiage or content of the syslog message is not defined by the protocol. Some messages are simple, readable text, others may only be machine readable.

Syslog messages are typically no longer than 1024 bytes.





Example of a Syslog Message


<165>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"] BOMAn application log entry...


Parts of the Syslog Message:

Part             Value                                                                  Information
PRI              165                                                                    Facility = 20, Severity = 5
VERSION          1                                                                      Syslog protocol version 1
TIMESTAMP        2003-10-11T22:14:15.003Z                                               Message created on 11 October 2003 at 10:14:15 pm, 3 milliseconds into the next second
HOSTNAME         mymachine.example.com                                                  Message originated from host "mymachine.example.com"
APP-NAME         su                                                                     App-Name: "su"
PROCID           -                                                                      PROCID unknown
MSGID            ID47                                                                   Message-ID: 47
STRUCTURED-DATA  [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"]   Structured data element with a non-IANA controlled SD-ID of type "exampleSDID@32473", which has three parameters
MSG              BOMAn application log entry...                                         BOM indicates UTF-8 encoding; the message itself is "An application log entry..."










The Syslog Server


The Syslog Server is also known as the syslog collector or receiver.

Syslog messages are sent from the generating device to the collector. The IP address of the destination syslog server must be configured on the device itself, either by command-line or via a conf file. Once configured, all syslog data will be sent to that server. There is no mechanism within the syslog protocol for a different server to request syslog data.
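For example, on a Linux device running rsyslog, the forwarding rule is often a single line in the conf file; the collector address below is only a placeholder:

# /etc/rsyslog.conf - send every facility and severity to the collector over UDP 514
*.* @192.168.1.50:514

Using @@ instead of @ forwards over TCP for confirmed delivery.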

While most Unix implementations and network vendors, like Cisco, have their own barebones syslog collectors, there are several others available as well.

Paessler’s PRTG monitoring software offers a built-in Syslog Receiver Sensor. The receiver collects all Syslog messages delivered. To use the function, the administrator needs to add the Syslog Receiver and then configure the IP address of that server as the destination server for syslog data on all devices to be monitored.

Once gathered, the dashboard shows:

  • The number of received syslog messages per second.

  • The number of messages categorized as “warning” per second.

  • The number of messages categorized as “error” per second.

  • The number of dropped packets per second.



The syslog protocol can generate a lot of messages. Syslog simply forwards messages as quickly as they are generated. As a result, the most important ability for a syslog server is the ability to properly filter and react to incoming syslog data.

The PRTG Syslog Receiver Sensor offers the ability to set filtering rules. These rules allow syslog messages to be included or excluded as warnings or errors, regardless of how they were originally generated on the device. This filtering ensures that administrators get notified about all the errors they want to know about without being overwhelmed by less important errors.






Syslog Monitoring









Security






The syslog protocol offers no security mechanism. There is no authentication built in to ensure that messages are coming from the device claiming to be sending them. There is no encryption to conceal what information is being sent to the server. It is particularly susceptible to so-called “playback attacks,” where an attacker replays a previous stream of warnings to elicit a response.





Syslog Design






Device Configuration






Most syslog implementations are configurable with respect to which facilities and which severity numbers will generate syslog events that are forwarded to the syslog server. It is important to configure this properly to avoid flooding the server (and the network) with unnecessary traffic. For example, Debug should never be set to send messages except during testing.

It is advisable to set the syslog parameters to forward only the highest severities (lowest numbers) needed, to minimize traffic. While a router error might indicate that an interface is down and thus definitely needs to be reported, a less important network printer might be configured to only generate syslog traffic for critical events.




Windows






Windows systems do not implement syslog within the standard Event Log system. The events generated within the Windows logging system can be gathered and forwarded to a syslog server using third-party utilities. These utilities monitor the Event Log, use the information to create a syslog formatted event, and forward the events using the standard syslog protocol.




Limitations






One major limitation of the syslog protocol is that the device being monitored must be up and running and connected to the network to generate and send a syslog event. A critical error from the kernel facility may never be sent at all if the system goes offline. In other words, syslog is not a good way to monitor the up and down status of devices.









Syslog Usage





While syslog is not a good way to monitor the status of networked devices, it can be a good way to monitor the overall health of network equipment. While network monitoring software like PRTG offers a suite of utilities to watch over a network, nothing tells an administrator that there is a problem faster than an event log filling up with warnings. Properly configured syslog monitoring will detect the sudden increase in event volume and severity, possibly providing notice before a user-detectable problem occurs.

Security/Authorization/Auditing


The average corporate network contains numerous devices that no one should be trying to gain access to on an average day. If a remote switch that only gets logged into once per audit cycle suddenly has daily login attempts (successful or otherwise), it bears checking out. On these types of devices, syslog can be set to forward authentication events to a syslog server, without the overhead of having to install and configure a full monitoring agent.

Syslog also provides a way to ensure that critical events are logged and stored off the original server. An attacker’s first effort after compromising a system is to cover the tracks left in the log. Events forwarded via syslog will be out of reach.

Application Monitoring


There are plenty of ways to monitor how an application is running on a server. However, those monitors can overlook how the application is affecting other processes on the server. While high CPU or memory utilization is easy enough to detect with other monitors, logged events can help show more possible issues. Is an application continuously trying to access a file that is locked? Is there an attempted database write generating an error? Events like these may go undetected when caused by applications that do a good job of working around errors, but they shouldn’t be ignored. Syslog will make sure those logged events get the attention they deserve.

Syslog as Part of Overall Network Monitoring


Complete network monitoring requires using multiple tools. Syslog is an important pillar in network monitoring because it ensures that events occurring without a dramatic effect do not fall through the cracks. Best practice is to use software that combines all of these tools so you always have an overview of what is happening in the network.


Wednesday, November 14, 2018

What is Firewall - Its importance and types


A firewall is a system designed to prevent unauthorized access to or from a private network. You can implement a firewall in either hardware or software form, or a combination of both. Firewalls prevent unauthorized internet users from accessing private networks connected to the internet, especially intranets. All messages entering or leaving the intranet (i.e., the local network to which you are connected) must pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

Note: In protecting private information, a firewall is considered a first line of defense; it cannot, however, be considered the only such line. Firewalls are generally designed to protect network traffic and connections, and therefore do not attempt to authenticate individual users when determining who can access a particular computer or network.

Several types of firewalls exist:

  • Packet filtering: The system examines each packet entering or leaving the network and accepts or rejects it based on user-defined rules. Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In addition, it is susceptible to IP spoofing.

  • Circuit-level gateway implementation: This process applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

  • Acting as a proxy server: A proxy server is a type of gateway that hides the true network address of the computer(s) connecting through it. A proxy server connects to the internet, makes the requests for pages, connections to servers, etc., and receives the data on behalf of the computer(s) behind it. The firewall capabilities lie in the fact that a proxy can be configured to allow only certain types of traffic to pass (e.g., HTTP files, or web pages). A proxy server has the potential drawback of slowing network performance, since it has to actively analyze and manipulate traffic passing through it.

  • Web application firewall: A web application firewall is a hardware appliance, server plug-in, or some other software filter that applies a set of rules to a HTTP conversation. Such rules are generally customized to the application so that many attacks can be identified and blocked.


In practice, many firewalls use two or more of these techniques in concert.

In Windows and Mac OS X, firewalls are built into the operating system.

To make use of a firewall, we implement policies >

There are mainly two zones:

Trust & Un-Trust

By default > traffic from trust to un-trust is allowed.

From un-trust to trust, all traffic is denied until we implement policies (a rough packet-filter sketch of these defaults follows below).
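As a rough illustration only (the interface names are placeholders, and zone-based firewalls such as Juniper SRX or Palo Alto express this with zones and security policies rather than per-interface rules), the same defaults can be modeled with iptables on a Linux box:

# eth1 = trust (LAN), eth0 = un-trust (Internet) - placeholder interface names
iptables -P FORWARD DROP                                          # deny everything by default
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT                     # trust -> un-trust allowed
iptables -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT   # only return traffic comes back in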

 

Sunday, May 20, 2018

Q-Radar (SIEM) || Architecture || Basic understanding || Tutorial


Q radar (Security Information and Event Management)


IBM® Security QRadar® SIEM is a network security management platform that provides situational awareness and compliance support. QRadar SIEM uses a combination of flow-based network knowledge, security event correlation, and asset-based vulnerability assessment.
QRadar SIEM provides extensive visibility and actionable insight to help protect networks and IT assets from a wide range of advanced threats. It helps detect and remediate breaches faster, address compliance, and improve the efficiency of security operations.

To get started, configure a basic Q Radar SIEM installation, collect event and flow data, and generate reports.

Basic Architecture of Q radar SIEM:



1) Log sources >>


We have third-party log sources that send data to QRadar for collection, storage, parsing, and processing. We can configure QRadar to accept these logs. A log source is a data source from which a log event is created. If a log source is not automatically discovered, you can manually add a log source to receive events from your network devices, applications, and anything else.

If there are devices that are very specific or custom built by the customer (devices that are not easy to integrate, like Juniper, FortiGate, etc.), we need the Universal DSM to get logs from them. For these kinds of devices, we sometimes need to write a regex to parse the logs, as sketched below.
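As a purely illustrative sketch (not an actual QRadar property definition), a custom-property regex that pulls the source IP out of a payload such as src=10.1.1.5 could look like:

src=(\d{1,3}(?:\.\d{1,3}){3})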

Log sources are configured to receive events over different log source protocols, such as SNMP, syslog, JDBC (Java Database Connectivity), and OPSEC (Open Platform for Security).

The DSM guide gives us exact information on how to configure log sources.

2) License Filter (License Throttle)>>




Events are received by the Event Collector, and the first filter applied is the license filter. The license filter monitors the number of events entering the system.

It only allows the number of events specified in EPS (Events Per Second). Suppose your license allows capturing only 5,000 events per second and 7,000 events arrive at the Event Collector: 5,000 events are processed and the remaining 2,000 stay in the buffer.

Note: Each event is counted against the license before events are coalesced (coalescing removes duplicate events).

 

What happens to the events in the buffer?


When the system goes over its license limit, burst handling seamlessly starts moving event or flow data to a temporary queue in an attempt to prevent any dropped events. A notification is also sent to the system administrator, informing them that the license limit has been exceeded.

As of QRadar 7.2.4, the buffer limit is 5 GB per queue (e.g., syslog, JDBC, etc.). Events are moved out of the queue in FIFO order.

The rate at which the temporary queue fills and empties depends on the license filter limit, the magnitude of the spike, the payload size, and other factors.

 

3) Event Parsing (DSM Parsing)>>


This is a parser that converts raw event logs from different log sources into human-readable records.

The events are normalized here. Normalization means the extraction of the properties that we use in QRadar, plus any custom properties that are marked as optimized.

Properties that are included in normalization are:

Event ID, source IP, destination IP, source port, destination port, protocol, pre-NAT IP/port, post-NAT IP/port, source or destination MAC, machine name, etc.

We can also add custom properties .

 What happens if the existing Parser (DSM) is not able to parse the events ?



  • There are two scenarios that can happen when the DSM parser is not able to recognize the events from a particular log source.

  • In both cases, the events show up under the “Un-parsed” filter.

  • Any event that shows up as one of these might not trigger the rules that should have been triggered, since QRadar is not able to recognize it. They are:

    1. Events are reported as “Stored” under the Log Activity tab

    2. Events are reported as UNKNOWN under the Log Activity tab




 

When the event shows up as “Stored”...


This means that the parsing logic for the associated log source from which these events are coming is not able to parse out anything from the incoming event.

In other words, the DSM parsing logic is failing for such an event.

If you created the log source manually, recheck the Log Source Type in the log source configuration and correct it if required.

If the log source was created correctly, or the log source was auto-created, then open a support ticket with IBM.

The most probable cause for this is a new event format that hasn’t been seen previously. In such a case, IBM would release a new DSM for that log source through the weekly auto updates to get this fixed.

 

When the event shows up as UNKNOWN


The event viewer will show events in the “unknown” category when the event name parsed from the event message does not match any of the known mappings between device event names and QRadar QIDs and low-level categories. This means that even though the DSM is able to parse out the different parameters from the event, the event name (parsed from the event payload) does not match any of the existing QIDs.

 

This most commonly occurs when using DSM extensions, which by design are never automatically mapped to known categories. In these cases, you need to map all the event names parsed out of the messages from your device to known QRadar categories.

The second scenario where this can occur is when a supported device has newly added message types that QRadar is not aware of. While IBM works to keep these mappings up to date by means of the auto update process, we might occasionally still see these messages.

 

You have the option of going ahead and mapping the events yourself (these mappings will not be overwritten by the auto update process later if they are added), waiting for an update to see if they are then mapped, or logging an issue with IBM Support.

 

Note that if the security team adds custom event names to a supported device, event names that do not come from the third-party vendor itself, IBM will not be aware of these, and you should go ahead and map them yourself. This is common with Snort, as many customers add their own signatures and messages.

To remap UNKNOWN events, open the event viewer and click the “Map Event”
button at the top of the Log Activity screen. If the system is able to parse out a unique name, you should see it in the “Device Event ID” or “Log Source Event ID” field (depending on the QRadar version).

 

4) Coalescing Filter >>


Events are parsed and then sent to the coalescing filter. You can choose to enable or disable coalescing while creating the log source. Auto-detected log sources have coalescing ON; you can edit such a log source and disable it.

Coalescing means that once 4 events are seen with the same source IP, destination IP, destination port, username, and event type, subsequent messages matching the same pattern for up to 10 seconds are coalesced together and reported as one event, with the event count representing the actual number of coalesced events.

This is done to reduce duplicate data stored in the database.

 

Coalescing does not affect the number of events counted against the license; the license filter comes before coalescing in the event processing pipeline.

5) CRE- Rules Processor (magistrate) >>


The Custom Rule Engine (CRE) is responsible for processing events received by QRadar and comparing them against defined rules, keeping track of the systems involved in an incident over time, generating notifications to users, and generating offenses.

 

The QRadar Custom Rule Engine (CRE) runs within ECS, in the event processor. The CRE runs on each managed host (16XX, 17XX, 18XX) and the Console (31XX, 2100) and monitors data as it comes through the pipeline.

 

When a single event matches a rule, the rule response section of the rule is executed, generating a new message, email, syslog message, offense, etc., as configured. Events that match rules are tagged with the rule and written to storage, so that you can search for events matching that rule later.

 

Rules – what are they?


Rules, also sometimes called correlation rules, are one of the most important factors that make QRadar intelligent. Rules perform tests on events, flows, or offenses, and if all the conditions of a test are met, the rule generates a response, which can be in the form of an alert. Rules can also be behavioral in nature.

By default, hundreds of different types of rules ship with QRadar. Rules for most common attacks, such as DoS, DDoS, and exploits, are already present in QRadar.

 

In QRadar, rules can also generate offenses. Offenses are security incidents that need attention.

7) Ariel storage >>


 

A time-series database for events and flows where data is stored on a minute-by-minute basis. The Ariel DB is a flat-file, pre-indexed, proprietary database of QRadar. The structure of this DB is what makes QRadar searches fast. Data is stored where the event is processed; remember that Consoles, 16XX, and 18XX appliances can all process events.

 

As events come into your appliance, they are processed by ECS and stored locally on the appliance during the storage phase of ECS.

  1. Events, like system notifications, received by a Console appliance are stored in the Console’s Ariel database.

  2. Events received by an EC, EP, or EP/FP appliance are stored in that appliance’s local Ariel database.


 

Traffic Analysis>>


Traffic Analysis, also known as Auto Detection, allows QRadar to auto-detect and create new log sources based on the incoming data stream.

 

When QRadar starts receiving data, it sends that data over to the traffic analysis engine for auto detection after running it through the DSM parser.

If the incoming data is from an unrecognized or unsupported device, QRadar will likely fail auto-detection. Events from that log source will show as UNKNOWN/Stored in the UI.

 

Create Log source manually>>


For a few DSMs, we need to create the log source manually since QRadar does not auto-discover them. In such cases, you will get a system notification saying that auto-discovery could not discover the log source.

 

The DSM guide has information on which log sources are auto-discovered and which need manual log source creation.

 

Offsite Target >>


QRadar has the ability to forward processed, parsed events to another QRadar deployment. This is typically used in a Disaster Recovery (DR) deployment, where customers want a second console/installation that holds a backup copy of production data and acts as the DR setup.

Event Streaming >>

Responsible for sending real-time event data to the Console when a user is viewing events from the Log Activity tab in real time (streaming).

 

Real-time streamed events are not picked up from the DB but are shown in real time after they pass through the CRE.

 

Only when you do historic searches are events picked up from the Ariel DB.