Continuing from last time, this is the second installment of my three-month This is My Architecture challenge.
This time I'm listening to "Ancestry: Building a Real-time and On-demand Logging System on AWS".
Purpose
In the limited time I have, I want to knock out all of the following at once:
- English dictation and shadowing (improving listening and speaking)
- Studying AWS services and seeing real-world examples
- Input and output
First, listen through once
Jim from Ancestry. Family history and, most recently, DNA company. Lots of data and connections.
Real-time, on-demand logging system.
Thousands of EC2 instances running, plus Fargate, sending lots of logs to CloudWatch. They send them to Kinesis.
Firehose? Directly to S3?
Directly to S3 first; stack ID, categorization, ordering.
Developers need access to the logs, but not all the time.
Eventually, a developer can open a particular time frame of logs.
A Lambda sends that frame to Kinesis.
As a developer I'd want to hunt for bugs and errors. Is that possible here? Has it been considered?
Yes, that's handled by Lambda.
They also have to comply with regulations like SOX, PCI, and HIPAA. Logs can be retrieved on demand when needed.
Summary
(This episode was easier to follow than the last one and the content went in smoothly, so I'll try summarizing it. That's output in its own right. This is a summary from a single casual listen, so at this point I can't be sure it's accurate.)
Ancestry, a family history (DNA) company, built a system that logs its data in real time.
In that system, thousands of EC2 instances and Fargate tasks are running, and all of their logs are collected in CloudWatch Logs. The logs are sent through Kinesis to S3, where they are reorganized per application in chronological order.
Developers can retrieve the relevant logs for bug and error investigation just by specifying the time frame they want to look at, which keeps costs down. The system also complies with regulations such as SOX, PCI, and HIPAA.
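The "reorganized per application in chronological order" part seems to be what makes the on-demand retrieval later in the video possible. Here is a minimal sketch of what such an S3 key layout could look like; the hour-bucketed scheme and names are my assumptions, since the video never shows the actual format.

```python
from datetime import datetime, timezone

def build_s3_key(app_id: str, ts: datetime, record_id: str) -> str:
    """Partition log objects by application and by hour, so a time-range
    query later only needs to list a handful of prefixes."""
    return f"logs/{app_id}/{ts:%Y/%m/%d/%H}/{record_id}.json"

print(build_s3_key("stack-123",
                   datetime(2020, 5, 1, 14, 30, tzinfo=timezone.utc),
                   "abc123"))
# -> logs/stack-123/2020/05/01/14/abc123.json
```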
Dictation
Hello and welcome to This is My Architecture. I'm Heitor Lessa, and I have Jim with me from Ancestry. Hello Jim, thank you for joining us. Can you tell us about Ancestry?
Certainly. Ancestry is a family history company, and most recently a DNA company, and our goal is to bring deep, meaningful connections to our customers.
Deep connections, and lots of data to make those connections, right?
That's right. That's what we are doing right here.
Yes.
Perfect. I can see you're doing some logging, on demand and in real time.
Can you walk us through this architecture?
Certainly. We have a real-time and on-demand logging system here that can collect from across our organization. So we have thousands of EC2 instances running, we have Fargate instances running, we have managed services that are sending logs into CloudWatch. And we pull those logs and we push them here into our Kinesis stream for processing right here, and so within the Kinesis stream we then can push them directly to S3.
Is that Kinesis Firehose sending straight to S3? Or do you do some pre-processing before that?
So that's what these Lambdas are for. Within here we process the logs as they come across the Kinesis stream, and the first thing we do is push them directly into S3, and in S3 we organize them according to stack ID or application ID, according to date and time, things like that.
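(A note to self while transcribing: if the CloudWatch Logs to Kinesis hop uses subscription filters, which the video doesn't confirm, the records arrive base64-encoded and gzipped. A minimal sketch of what that processing Lambda might look like; the bucket name and key scheme are my assumptions.)

```python
import base64
import gzip
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-log-archive"  # hypothetical bucket name

def handler(event, context):
    """Triggered by the Kinesis stream; archives each log event to S3."""
    for record in event["Records"]:
        # CloudWatch Logs subscription payloads are base64-encoded and gzipped.
        payload = gzip.decompress(base64.b64decode(record["kinesis"]["data"]))
        batch = json.loads(payload)
        if batch.get("messageType") != "DATA_MESSAGE":
            continue  # skip CONTROL_MESSAGE health checks
        for ev in batch["logEvents"]:
            dt = datetime.fromtimestamp(ev["timestamp"] / 1000, tz=timezone.utc)
            # Organize by log group ("application") and by date/time, as Jim describes.
            key = f"logs/{batch['logGroup'].strip('/')}/{dt:%Y/%m/%d/%H}/{ev['id']}.json"
            s3.put_object(Bucket=BUCKET, Key=key, Body=ev["message"].encode("utf-8"))
```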
So you do all this categorization and ordering. Is that what the historical loader is going to be used for?
Right, so if we look down at our developer, the developer needs access to the logs, right, but they don't need access to the logs all the time.
Logs are flowing in production, and eventually they're going to want to look at their logs, an alert will happen or something like that. So the idea is that a developer can open a time window and say, I need to look at the last two hours' worth of logs, and I need to look at the logs for the next hour too.
And so we take that, we hit our gateway, and one of our Lambdas starts pulling real-time logs out of here and into our Kinesis stream. But if I want to look at historical logs, I pull them out of S3, because I had them organized by application and by date and time, so we know exactly when that log was produced, and we also start moving those into here, into our Kinesis and Elasticsearch.
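(With the hour-bucketed layout sketched earlier, the historical loader only has to list the S3 prefixes that fall inside the requested window and replay them into Kinesis for indexing. A sketch of that idea; all names here, bucket, stream, and key layout, are my assumptions.)

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

def replay(app_id: str, start: datetime, end: datetime,
           bucket: str = "example-log-archive",
           stream: str = "example-ondemand-stream"):
    """Replay archived logs for one app and time window into Kinesis."""
    t = start.replace(minute=0, second=0, microsecond=0)
    while t <= end:
        prefix = f"logs/{app_id}/{t:%Y/%m/%d/%H}/"
        pages = s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix)
        for page in pages:
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                kinesis.put_record(StreamName=stream, Data=body, PartitionKey=app_id)
        t += timedelta(hours=1)

# e.g. "pull me the logs for this stack ID between 12:00 and 17:00 last week"
replay("stack-123",
       datetime(2020, 4, 24, 12, tzinfo=timezone.utc),
       datetime(2020, 4, 24, 17, tzinfo=timezone.utc))
```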
Okay, from the developer perspective, apart from the time and time range, do they define something else? Like, usually, if I'm a developer, I would want to say: give me the logs where I have bugs or errors or warnings. Are log levels also included in the solution?
Yes, so as they process across the Kinesis stream, our Lambdas can then analyze the logs. We know what different log levels are coming through, we know how many are coming through, and we can then inform the developer when an error rate goes up or when we are seeing more error-level logs.
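(The log-level analysis Jim mentions could be as simple as counting levels as records flow past and publishing a metric for an alarm to watch. A sketch, assuming a plain-text log format and made-up metric names; the video doesn't show how they actually do it.)

```python
import re
from collections import Counter

import boto3

cloudwatch = boto3.client("cloudwatch")
LEVEL_RE = re.compile(r"\b(DEBUG|INFO|WARN|ERROR|FATAL)\b")

def report_levels(messages, app_id: str):
    """Count log levels in a batch and publish them as CloudWatch metrics."""
    counts = Counter()
    for msg in messages:
        m = LEVEL_RE.search(msg)
        counts[m.group(1) if m else "UNKNOWN"] += 1
    cloudwatch.put_metric_data(
        Namespace="Example/Logs",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": f"{level}Count",
                "Dimensions": [{"Name": "AppId", "Value": app_id}],
                "Value": float(n),
                "Unit": "Count",
            }
            for level, n in counts.items()
        ],
    )
```

An alarm on the error-count metric (or on its rate of change) would then notify the developer, matching the "inform the developer when an error rate goes up" part.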
Right, and that's all being done through these Lambdas here, the metrics and all this...
That's right.
Because you have lots of data, how do you keep this data? Because, I presume, you also have regulations.
That's right, we have to follow SOX, PCI, and HIPAA compliance, and so within that we have to keep logs for a certain amount of time. So in S3 we have policies in there to move them into Glacier, and then it eventually just deletes them, because again, most of them are not looked at unless you need them for the law.
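(The compliance handling maps naturally onto an S3 lifecycle rule: transition to Glacier, then expire. Jim only says "a certain amount of time", so the day counts below are placeholders, not Ancestry's actual retention.)

```python
import boto3

s3 = boto3.client("s3")

# Move archived logs to Glacier, then delete them once retention has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-delete-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],  # placeholder
                "Expiration": {"Days": 2555},  # placeholder, roughly 7 years
            }
        ],
    },
)
```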
Hence the on-demand logging, which is a perfect solution.
And most customers, when they have a solution that looks like this, they would probably keep the logs in the Elasticsearch index for 30 days or so. I don't think you do this on your system, do you?
No, we keep the logs in there for 48 hours. And why? Because I can ask for them again. So let's say I had an incident last week between 12 and 5 o'clock. I can go to the API and say, pull me the logs for this stack ID during that time range, and it will bring them in, index them, let them look at it, and they will be gone within 48 hours.
They can just ask for them again.
Massive economics on that...
Massive economics. We don't have to have nearly as big of an Elasticsearch cluster.
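(How the 48-hour cleanup is implemented isn't shown. One common pattern, my assumption rather than necessarily theirs, is to write on-demand logs into time-stamped indices and have a scheduled job drop anything older than the window. The index naming and endpoint below are invented.)

```python
from datetime import datetime, timedelta, timezone

import requests

ES = "https://search-example.us-east-1.es.amazonaws.com"  # hypothetical endpoint

def purge_old_indices(hours: int = 48):
    """Delete hourly on-demand indices older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    names = requests.get(f"{ES}/_cat/indices/ondemand-*?h=index").text.split()
    for name in names:
        try:
            # Index name like ondemand-2020-05-01-14, one index per hour (assumed).
            created = datetime.strptime(name, "ondemand-%Y-%m-%d-%H").replace(tzinfo=timezone.utc)
        except ValueError:
            continue  # ignore indices that don't follow the naming scheme
        if created < cutoff:
            requests.delete(f"{ES}/{name}")
```

Because anything purged can be re-requested from S3 at any time, the cluster only ever holds a 48-hour working set, which is where the "massive economics" come from.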
That's awesome.
So let me see if I understood everything here.
You've got thousands of machines running on EC2 and Fargate, plus managed services.
From what I've heard, all those logs go to CloudWatch first.
They get passed to Kinesis, and you then send them to S3 and reorder those logs so it's easier to actually categorize them afterwards. And the historical loader is when you actually get that data: you look up an application ID and a log level or date-time range, and you provide the logs to the developer whenever they need them, on demand. It's not something they need all the time, as we all know, right?
That's right.
And you actually kill or destroy all these environments for cost savings. And even better, even if they wanted them for longer, you actually only keep them for 48 hours, right? Perfect, that's awesome.
Thank you for showing that to us, Jim.
It is a pretty cool architecture.
Pretty cool solution, thank you.
Thank you for watching This is My Architecture.
Finally, shadowing
I'm so sleepy my tongue won't keep up...
My summary after the first listen wasn't wrong, but it was missing information. My brain is still spending its resources on English-to-Japanese conversion, so I can't retain the content yet.
Conclusion
The speech was slower and more clearly enunciated than last time, so I think I caught it relatively well.
But it's still pretty tough...