0x01 Wooyun

- SQL injection
- GET injection on the mobile side of an iOS app
- Insert injection
- DNS zone-transfer vulnerability
- Java deserialization command execution
- Burpsuite packet capture and interception
- Burpsuite: pick a few common number segments and brute-force only the last 4 digits
- Burpsuite brute force
- Password reset via session confusion: enter your own phone number normally, request a verification code, click Next, and stop at the set-new-password page; in a second window, enter the target's phone number and send a code; then go back to the earlier set-new-password page and change the password, which resets the target's password
- Bash Shellshock (Bash remote code execution)
- FortiGate firewall SSH backdoor

0x02 Crawler Practice, Part 1
Use Firebug to inspect network traffic. In Python, urllib2.Request() builds a request, urllib2.urlopen() opens it and returns a file-like object, and calling .read() on that object (or passing the object into BeautifulSoup()) yields the page source; the analysis then works on that source. With Firebug, the equivalent is to open the Network panel and look at the response body of the request, which is the same page source that .read() returns.
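A minimal sketch of that flow, assuming Python 2 with BeautifulSoup 4 installed (the URL and the User-Agent value are placeholders):

```python
# -*- coding: utf-8 -*-
import urllib2
from bs4 import BeautifulSoup

request = urllib2.Request('http://example.com')
request.add_header('User-Agent', 'Mozilla/5.0')  # pose as a browser
response = urllib2.urlopen(request)              # file-like object
html = response.read()                           # raw page source
soup = BeautifulSoup(html, 'html.parser')        # or feed `response` to BeautifulSoup directly
print soup.title.string
```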
Encoding problems come up when saving text to a txt file with file.write(str): the string has to be encode()d first. ASCII and Unicode are character sets, while utf-8 is an encoding of a character set; utf-8 is one way of encoding the Unicode character set. A py file has an encoding of its own: the interpreter decodes source files as ASCII by default, so the file encoding must be declared; whenever a py file contains non-ASCII characters, first declare the encoding at the top of the file with coding=utf-8. The general rule is to decode everything to Unicode characters first, then encode uniformly on output.
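A minimal sketch of that decode-then-encode rule (Python 2; the GBK input is simulated):

```python
# -*- coding: utf-8 -*-
gbk_bytes = u'你好'.encode('gbk')  # pretend this arrived from a GBK source
text = gbk_bytes.decode('gbk')     # step 1: decode the input to Unicode
with open('out.txt', 'a+') as f:
    f.write(text.encode('utf-8'))  # step 2: encode uniformly (utf-8) on output
```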
Linux defaults to utf encoding and Windows defaults to gbk; web pages also have encodings of their own. Unicode acts as the intermediate encoding: the page is first decoded to Unicode, then encoded into the system encoding for output to the browser. Many pages carry a hint in their source, such as <meta charset="UTF-8">.
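A small sketch for checking the encodings involved (Python 2; the URL is a placeholder):

```python
import sys
import locale
import urllib2
from bs4 import BeautifulSoup

print sys.getdefaultencoding()       # 'ascii' on Python 2
print locale.getpreferredencoding()  # e.g. 'UTF-8' on Linux, 'cp936' (gbk) on Chinese Windows
html = urllib2.urlopen('http://example.com').read()
soup = BeautifulSoup(html, 'html.parser')
print soup.original_encoding         # what BeautifulSoup detected, e.g. from <meta charset>
```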
In Pycharm, console output also displays correctly if the console is set to UTF-8. BeautifulSoup prints to the console in UTF-8 by default (that is, the document object returned by BeautifulSoup() is output in utf-8 format). But when .string needs to be written to a txt file, keep in mind that .string is a NavigableString object (effectively a subclass of the unicode type), so it is Unicode-encoded (strings inside Python are Unicode); only when .string is printed to the console does BeautifulSoup convert it to UTF-8 automatically.
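A quick check of those claims (Python 2 + BeautifulSoup 4; the HTML snippet is made up):

```python
# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup, NavigableString

soup = BeautifulSoup(u'<p>你好</p>', 'html.parser')
s = soup.p.string
print isinstance(s, NavigableString)  # True
print isinstance(s, unicode)          # True: NavigableString subclasses unicode
with open('out.txt', 'a+') as f:
    f.write(s.encode('utf-8'))        # encode before writing to a txt file
```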
The for loop logic: a for loop finds each POC name and its link page, enters that POC's detail page, and a second for loop prints the detail content and saves it, as in the full source below.
```python
# Full source: combines the two page-reading steps and saves each POC to a txt file.
# url_public, url_index and user_agent are defined earlier in the script.
import re
import urllib2
from bs4 import BeautifulSoup

request_public = urllib2.Request(url_public)
# pose as a browser by adding a User-Agent
request_public.add_header('User-Agent', user_agent)
response_public = urllib2.urlopen(request_public)
soup_public = BeautifulSoup(response_public.read(), 'html.parser')
full_text = soup_public.find_all(href=re.compile(r'poc'))
for each_public in full_text:
    print each_public.string
    # poc_list.append(each_public.string)
    url_vul = url_index + each_public.attrs['href']
    # vul_list.append(url_vul)
    request_vul = urllib2.Request(url_vul)
    request_vul.add_header('User-Agent', user_agent)
    response_vul = urllib2.urlopen(request_vul)
    soup_vul = BeautifulSoup(response_vul.read(), 'html.parser')
    # inspect the source
    # print soup_vul
    # vul_text = soup_vul.find_all('pre')
    # output only the code blocks
    vul_text = soup_vul.find_all('pre', class_="brush: python;")
    for each_vul in vul_text:
        print each_vul.string
        # strip '/' from the name, since file names do not allow it
        save_name = each_public.string.replace('/', '') + '.txt'
        # soup_vul is utf-8 encoded; BeautifulSoup's default output format is utf-8
        # print soup_vul.original_encoding
        with open(save_name, 'a+') as f:
            # each_vul.string is a NavigableString object (effectively a subclass
            # of unicode); unicode() converts it to a plain Unicode string, and it
            # must be encoded as utf-8 when written out to the txt file
            f.write(each_vul.string.encode('utf-8'))
            # f.write('\r\n')
```
The sys module provides, among other things, access to the command-line arguments; the argparse module parses command-line options and arguments.
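A minimal sketch of the two modules (the --url option here is hypothetical):

```python
import sys
import argparse

print sys.argv  # raw command-line arguments; sys.argv[0] is the script name

parser = argparse.ArgumentParser(description='crawler demo')
parser.add_argument('--url', help='page to fetch')
args = parser.parse_args()
print args.url
```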
str.replace(a, b) replaces a with b in str; to strip several forbidden characters at once, re.sub does the same job with a regular expression:
```python
import re
# define the replacement rule and build the regex pattern
a = re.compile('[/\\\?\*<>]')
# use the regex sub() function to replace the matched characters in str with c
b = a.sub(c, str)
```
0x03 Daily Summary