Add notes from some splunk virtual classes

ghetto/notes/SplunkDataAdministration/day1.txt

Teacher: joanna@splunk.com

My Intro:
I'm Orien (sounds like the constellation Orion). I just started as a Splunk
Engineer at Defense Point Security. I've finished a couple of Splunk classes
over the last month, nothing practical yet. Linux for 20+ years. Dogs.

Goals:
- manage and deploy forwarders with management tools (Modules 4 & 5, critically important)
- configure common splunk data inputs
- customize the input parsing process
- Not covering creating splunk indexes

Schedule: Modules 1-4 today, 4-7 tomorrow, 8-12 Friday

Module 1: Introduction

Input > Parsing > Indexing > Searching
Primary components: Forwarder, Indexer, Search Head
Additional: Heavy Forwarder, Deployment Server

Splunk Data Administrator role:
- data onboarding and management
- work with users requesting new data, define events and fields for ingest
- prioritize requests
- document everything
- design and manage inputs for UF/HF to capture data
- manage parsing, line breaking, timestamp extraction
- move from testing to production

Lab 1:
Path: /opt/splunk

Module 2: Getting Data In - Staging

Input phase - broad strokes only
- most configuration is in inputs.conf
- some configuration occurs in props.conf
Parsing phase - fine-tuned tweaks
- most configuration is in props.conf
- also uses transforms.conf
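A minimal sketch of how the two phases split across files (path and sourcetype name are made up for illustration):

    # inputs.conf - input phase: what to collect
    [monitor:///var/log/secure.log]
    sourcetype = linux_secure
    index = os

    # props.conf - parsing phase: how to break and timestamp it
    [linux_secure]
    SHOULD_LINEMERGE = false
    TIME_FORMAT = %b %d %H:%M:%S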

_thefishbucket contains file monitoring audit information.

Custom indexes control access, improve performance, and control retention time for each index individually.

Index-time precedence: local/default file processing under apps occurs in ASCII sort order.

splunk btool <conf-name> list <options>
options: --debug --user=<user> --app=<app>
example: splunk btool inputs list monitor:///var/log/secure.log --debug
--debug shows which config files the settings came from.

Module 3: Getting Data In - Production

Universal Forwarder bandwidth is limited to 256KBps by default.
UF only forwards to splunk instances, and only one at a time.
HF can forward to other products, and to more than one at a time.
HF can be used as a mid-stage forwarder in multi-tier forwarding setups.
HF is no longer best practice.
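A minimal outputs.conf sketch for pointing a forwarder at an indexer (group name, host, and port are made up):

    # outputs.conf on the forwarder
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = indexer01.example.com:9997
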
Module 4: Deployment Server

Server classes have one or more apps.
A server belongs to one or more classes.
So a server gets apps via the classes it belongs to (see the serverclass.conf sketch below).
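A hypothetical serverclass.conf expressing that relationship (class, whitelist, and app names are made up):

    # serverclass.conf on the deployment server
    [serverClass:linux_hosts]
    whitelist.0 = web*.example.com

    [serverClass:linux_hosts:app:my_inputs_app]
    restartSplunkd = true
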

ghetto/notes/SplunkDataAdministration/day2.txt

Module 5 - Monitor Inputs

Question: How does splunk handle a file rotation if it happens during a restart? Is data lost?
Answer: directory monitors do lose data, file monitors don't.
slide 112
splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file <source> --reset

tcp/udp default source name: <host>:<port>

Scripted input locations:
* $SPLUNK_HOME/etc/apps/<app_name>/bin # This is the best place for it.
* $SPLUNK_HOME/bin/scripts
* $SPLUNK_HOME/etc/system/bin
Test a script: ./splunk cmd <path>/script.sh # runs the script in Splunk's environment, to verify splunk can execute it.
Scripted inputs can also buffer data, similar to the network collectors.
Better to have cron run the script and dump the data to a logfile, then have splunk monitor the logfile instead (sketch below).
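A hypothetical sketch of the cron-plus-monitor pattern (script path, schedule, and sourcetype are made up):

    # crontab on the host: run the collector every 5 minutes
    */5 * * * * /opt/scripts/collect_stats.sh >> /var/log/collect_stats.log

    # inputs.conf: monitor the logfile instead of a scripted input
    [monitor:///var/log/collect_stats.log]
    sourcetype = collect_stats
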
Module 7: Windows & Agentless

Windows
Input types: admon perfmon WinEventLog WinHostMon WinPrintMon WinRegMon

Warning from fellow student:
Just throwing this out there. If you monitor the registry in a way that
causes the Universal Forwarders to send you their entire registry, you
are likely to clog WAN links. I saw a 16 Gbps WAN link go down because
of this when thousands of Windows systems were sending over their
registry.

[WinEventLog://Security]
whitelist1 = "Stuff"
whitelist2 = "Other stuff"
blacklist - same syntax
Maximum of 10 whitelists and blacklists per universal forwarder stanza.

Can do WMI remote inputs; not recommended for environments bigger than small, scales poorly, requires an AD account.

Special field extractions
IIS: frequently reconfigured on the fly by admins. Obviously this is a problem.
Use indexed field extraction on the Windows forwarder to correct this.
Ensure that the header is in the same place and never moves; then the forwarder can use that header to pre-parse the data.
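A minimal props.conf sketch of indexed field extraction for a header-based log like IIS (the sourcetype name is made up):

    # props.conf on the forwarder
    [iis_w3c]
    INDEXED_EXTRACTIONS = w3c
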
PowerShell input: otherwise the same as the scripted input, and still better to have Windows schedule it instead.

Agentless

Splunk App for Stream
Essentially a packet capture agent: monitors the network, collects its data there, then sends it into splunk.

HTTP Event Collector (HEC)
Splunk listens for HTTP inputs; clients send their data to the HTTP listener.
Distributed HEC deployment options: can scale, because every splunk system can act as a collector to receive data from a load balancer.
Disabled by default: Settings > Data Inputs > HTTP Inputs.
Create a token, then define metadata for the stream.
Data can be transmitted as JSON.
Can send acks, but that requires additional handshaking for the response channel.
Multi-event JSON posts are possible, but in a non-standard format: { stuff }{ stuff 2 }{ stuff 3 } rather than standard [{},{},{}].
My token: 3372606C-6D24-48A4-A28D-09C616A277E7
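A minimal sketch of sending one event to HEC with curl (host, port, and token are placeholders):

    curl -k https://splunk.example.com:8088/services/collector/event \
      -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
      -d '{"event": "hello from HEC", "sourcetype": "manual"}'
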
Module 8: Fine-Tuning Inputs

props.conf is very important.
Inputs phase:
- character encoding (default is UTF-8)
- fine-tuned source types
- can override the defaults on a per-file basis (sketch below)
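A hypothetical props.conf per-file override in the inputs phase (path, charset, and sourcetype are made up):

    [source::/var/log/legacy/app.log]
    CHARSET = ISO-8859-1
    sourcetype = legacy_app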

Parsing phase:
- event breaks
- time extraction
- event transformation

Module 9: Parsing Phase and Data Preview

props.conf.spec - LINE_BREAKER is the best way to split lines, ProServ recommended (sketch below).
Take extra time to ensure timestamps are correct.
The timezone comes either from a TZ in the timestamp itself, or one specified in props.conf, or the tz of the indexer.
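A minimal props.conf sketch combining line breaking with explicit timestamp settings (sourcetype and patterns are made up):

    [my_app_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    TZ = UTC
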

ghetto/notes/SplunkDataAdministration/day3.txt

Module 10:

Modifying raw data before it's indexed.
Use per-event source types only in a last-chance scenario; everything else is better.

To set metadata in transforms.conf:
SOURCE_KEY = _raw
REGEX = server:(\w+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
The Host metadata key is written as host::<value>.
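For the override to actually run, the transform has to be referenced from props.conf. A hypothetical wiring (stanza and transform names are made up):

    # transforms.conf
    [set_host_from_event]
    SOURCE_KEY = _raw
    REGEX = server:(\w+)
    DEST_KEY = MetaData:Host
    FORMAT = host::$1

    # props.conf
    [my_sourcetype]
    TRANSFORMS-sethost = set_host_from_event
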
To change the index at index time (note the additional underscore here):
REGEX = (Error|Warning)
DEST_KEY = _MetaData:Index
FORMAT = itops

Filter Events
FORMAT = nullQueue
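The notes only captured the FORMAT line; filtering to nullQueue also needs DEST_KEY = queue and a props.conf reference. A sketch with made-up names:

    # transforms.conf
    [drop_debug_events]
    REGEX = DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue

    # props.conf
    [my_sourcetype]
    TRANSFORMS-drop = drop_debug_events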

http://<splunk>/debug/refresh - forces splunk to refresh its config(?)
At a minimum it reloads the inputs configurations; it definitely doesn't touch the indexer.

I need to go over modules 10 and 11. Missed too much, I fear.

Module 12: Diag
Creates a diagnostic package for shipment to experts.
./splunk diag
Create and index a diag.

Course Review:
Mod 1 -
joanna@splunk.com

ghetto/notes/SplunkSystemAdministration/day1.txt

Teacher: Mitch Fleischman, mitchf@splunk.com

studentid: 06
ipaddress: 52.53.200.165 10.0.0.206
ssh username: btv_splunker06

Set servername and hostname to splunk06 (CLI sketch below).
Also set the session timeout to something helpful for class.
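One way to do the rename from the CLI, using the standard splunk set subcommands (splunk06 is the class-assigned name):

    ./splunk set servername splunk06
    ./splunk set default-hostname splunk06
    ./splunk restart
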
Modules 1-6.5 today, 6.5-11 tomorrow

When do you add more indexers?
Partly based on how much searching, but add a new indexer every 100-250GB of daily volume.
With Enterprise Security, you'll trend closer to the lower number (i.e. more indexers).

Search heads?
8-12 users per search head.
"User" might mean scheduled searches, etc.

Hardware:
12GB RAM
indexer: 12 cores @ 2GHz, 800 IOPS
search head: 16 cores @ 2GHz, 2x 10k SAS in RAID1

Splunk KV store is MongoDB.

Linux OS tuning: pg 20
ulimit -c 1073741824
ulimit -n (48 x default)
ulimit -u (12 x default)

Disable THP (transparent huge pages).

Change the root password; insert a sha256 checksum (I believe) into $SPLUNK_HOME/etc/passwd to change the admin password.

./splunk enable boot-start -user <username>

Windows
Starts automatically.

$SPLUNK_DB = $SPLUNK_HOME/var/lib/splunk

Licensing:
3 warnings for free splunk, 5 for paid
30-day rolling window

Module 3: Installing Apps

An app is a collection of files: (inputs, indexes, sourcetypes, extractions, transformations), (eventtypes, tags, reports, dashboards, other KOs), (scripts, web assets).
An add-on is an app subset (like the bits needed to make a forwarder work).

Remove an app:
splunk remove app <app_folder>

Permissions:
read - to see and interact with it
write - to add/delete/modify the KOs in the app
Default is read-only.

Module 4: Configuration files

*/default - comes with splunk
*/local - user overrides
.meta files determine how global a configuration file setting is.

app/metadata/local.meta
[tags/action%3Daddtocart/browser]
access = read : [ * ]
export = (none|system)
owner
version
modtime

splunk btool check
splunk btool inputs list monitor:///var/log --debug # --debug shows which file each line came from
splunk btool tags list # lists all tags configured; add --debug to also show the file they came from
splunk btool tags list --debug --app=search --user=<username>

ghetto/notes/SplunkSystemAdministration/day2.txt

Module 5 - Buckets and Indexes (lots of material from the DataAdmin class)

Show data utilization for an index with details:
|dbinspect index=<index>

Can set the default search window per app:
ui-prefs.conf
[search]
dispatch.earliest_time = -24h@h
dispatch.latest_time = now

Module 6 - Splunk Index Management

Recommendations:
- roll hot buckets daily
- maxHotBuckets - limit of 10 hot buckets for a high-volume index (default 3)
- frozenTimePeriodInSecs - how long to wait before freezing buckets (sketch below)
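A hypothetical indexes.conf retention sketch using those settings (index name and values are made up):

    [soc]
    maxHotBuckets = 10
    frozenTimePeriodInSecs = 7776000 # 90 days, then buckets roll to frozen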

indexes.conf volumes:
[volume:fast]
path = <>
maxVolumeDataSizeMB = <size>

[soc]
homePath = volume:fast/soc/db # homePath is hot and warm buckets
homePath.maxDataSizeMB = <size>
coldPath # Same thing for cold

Backups
$SPLUNK_HOME/var/lib/splunk // indexes
$SPLUNK_HOME/etc // configs

Hot buckets cannot be backed up without stopping splunk, or using snapshots.
Alternatively, for high volume, run multiple daily incremental backups to grab data frequently.

Moving an index:
stop splunk
then move the directories
then update indexes.conf to point at the new locations
if it's a global move, update the SPLUNK_DB environment variable

Removing data:
wait for expiration
The delete command marks events as deleted but doesn't free space; you need the special can_delete role to run it.
Search> search for some records | delete

> splunk clean [eventdata|userdata|all] [-index name]
Actually removes the data from the index entirely and frees space.
If no index is provided, it deletes all the data!

Restoring data from frozen:
only raw data is frozen, no index files
copy the archive directory into the index-specific thaweddb directory
Then rebuild the index for that data; it doesn't count against licensing again.
> splunk rebuild <path to thawed bucket directory>

Index replication

Module 8: Authentication Integration
LDAP, PAM, RADIUS, AD, etc.