
Weekly meeting 2015 12 09

Yangjun Wang edited this page Dec 11, 2015 · 2 revisions

#### Time: 3:00pm to 4:30pm, 2015/12/09

Address:

A351, CS building

Work during the last week:

  1. Ran better tests (comparing results with MySQL) on the Flink join operators (both windowed and non-windowed joins);
  2. Decided to use logging to collect performance statistics (latency and throughput);
  3. Implemented statistics logging in Flink and Storm;
  4. Created a JIRA issue in the Flink project and a corresponding solution project;
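
The logging-based measurement of item 2 can be illustrated with a small framework-free sketch. `PerfLogger` is a hypothetical helper (not the project's actual logger): each record carries its emit timestamp, and the sink logs arrival time, from which per-record latency and overall throughput follow.

```python
class PerfLogger:
    """Minimal sketch of logging-based performance statistics:
    latency = arrival time - emit time, throughput = records / elapsed."""

    def __init__(self):
        self.latencies = []
        self.count = 0
        self.start = None
        self.end = None

    def record(self, emit_ts, arrive_ts):
        """Log one record; timestamps are in seconds."""
        if self.start is None:
            self.start = arrive_ts
        self.end = arrive_ts
        self.latencies.append(arrive_ts - emit_ts)
        self.count += 1

    def throughput(self):
        """Records per second over the observed interval."""
        elapsed = self.end - self.start
        return self.count / elapsed if elapsed > 0 else float(self.count)

    def avg_latency(self):
        return sum(self.latencies) / len(self.latencies)
```

In Flink or Storm the equivalent would live in a sink operator/bolt that writes these numbers to the log for offline aggregation.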

Discussions and decisions

  1. Skip the Spark offline test;
  2. Spark throughput = total elements / interval time;
  3. Measure throughput at the beginning and end of the workload;
  4. Find a better Zipf generator;
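
On item 4, one common improvement over naive rejection-based Zipf sampling is to precompute the cumulative distribution once and draw by binary search. A sketch (the function name and interface are made up for illustration):

```python
import bisect
import itertools
import random

def zipf_sampler(n, s=1.0, rng=None):
    """Return a draw() function sampling keys 1..n with P(k) proportional
    to 1/k^s. The CDF is built once, so each draw is O(log n)."""
    rng = rng or random.Random()
    weights = [1.0 / k ** s for k in range(1, n + 1)]
    cdf = list(itertools.accumulate(weights))
    total = cdf[-1]

    def draw():
        # Invert the CDF: find the first bucket whose cumulative weight
        # exceeds a uniform draw scaled to the total weight.
        return bisect.bisect_left(cdf, rng.random() * total) + 1

    return draw
```

With skew `s = 1.0`, key 1 should appear far more often than key `n`, which makes the generator easy to sanity-check against the theoretical frequencies.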

Upcoming work

  1. Implement Spark performance logging;
  2. Implement performance statistics over the logs (hint: window percentage);
  3. Implement a first version of the test with the whole test flow;
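
Assuming "window percentage" in item 2 refers to per-window percentile statistics (an interpretation, not confirmed by the notes), the aggregation step could look like this nearest-rank sketch:

```python
def window_percentile(latencies, window_size, pct):
    """For each non-overlapping window of `window_size` samples, return
    the pct-th percentile latency using the nearest-rank method."""
    results = []
    for i in range(0, len(latencies) - window_size + 1, window_size):
        window = sorted(latencies[i:i + window_size])
        # Nearest-rank: ceil-style index into the sorted window.
        rank = max(0, int(round(pct / 100.0 * window_size)) - 1)
        results.append(window[rank])
    return results
```

Running this over the logged latencies from each framework would give comparable per-window tail-latency numbers.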

It seems that in most of the Spark Streaming API it is impossible to know which record is the last one of the current mini-batch.
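
A possible workaround: Spark Streaming's `DStream.foreachRDD` does invoke a function once per mini-batch, so batch-level statistics (record count, batch completion time) can be logged at that boundary even if individual records cannot be tagged as "last". A framework-free sketch of that per-batch callback pattern, with a hypothetical `log_batch` callback:

```python
import time

def process_batches(batches, log_batch):
    """Mimics per-batch processing in the style of foreachRDD: the
    callback fires once per mini-batch, which is where batch-level
    throughput can be logged without identifying the last record."""
    for batch in batches:
        count = 0
        for record in batch:
            count += 1  # per-record processing would go here
        # The batch boundary is known here, so log count and timestamp.
        log_batch(count, time.time())
```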