From ae6b146c094d36d632ee089b65845b87dafb514b Mon Sep 17 00:00:00 2001
From: vinhn
Date: Thu, 28 Apr 2022 18:38:53 -0700
Subject: [PATCH 1/4] adding cpp example for saved model

---
 README.md                                     |   0
 .../TF-TRT_CPP_inference_overview.png         | Bin 0 -> 58378 bytes
 .../{ => frozen-graph}/BUILD                  |   0
 .../frozen-graph/README.md                    | 106 ++++++++++++++++++
 .../TF-TRT_CPP_inference.png                  | Bin
 .../{ => frozen-graph}/main.cc                |   0
 .../{ => frozen-graph}/tftrt-build.sh         |   0
 .../{ => frozen-graph}/tftrt-conversion.ipynb |   0
 8 files changed, 106 insertions(+)
 mode change 100644 => 100755 README.md
 create mode 100644 tftrt/examples-cpp/image_classification/TF-TRT_CPP_inference_overview.png
 rename tftrt/examples-cpp/image_classification/{ => frozen-graph}/BUILD (100%)
 create mode 100755 tftrt/examples-cpp/image_classification/frozen-graph/README.md
 rename tftrt/examples-cpp/image_classification/{ => frozen-graph}/TF-TRT_CPP_inference.png (100%)
 mode change 100644 => 100755
 rename tftrt/examples-cpp/image_classification/{ => frozen-graph}/main.cc (100%)
 rename tftrt/examples-cpp/image_classification/{ => frozen-graph}/tftrt-build.sh (100%)
 mode change 100644 => 100755
 rename tftrt/examples-cpp/image_classification/{ => frozen-graph}/tftrt-conversion.ipynb (100%)

diff --git a/README.md b/README.md
old mode 100644
new mode 100755
diff --git a/tftrt/examples-cpp/image_classification/TF-TRT_CPP_inference_overview.png b/tftrt/examples-cpp/image_classification/TF-TRT_CPP_inference_overview.png
new file mode 100644
index 0000000000000000000000000000000000000000..de35058bc188546b8ce3c9de0c2c5c4f9348a953
GIT binary patch
literal 58378
[58378 bytes of base85-encoded PNG data omitted]
zsf9gq@!%`>^qKn8_l_GskTP%rnPCa!M4w9^`F&*a*TAyqAE3iF%pQ1Ddw)o2
p=$!N`x0WyAnosrOP7l}w~|SrmH#QG6sf`E&4BqL5sCv}ywL=R<hSPBme*a

literal 0
HcmV?d00001

diff --git a/tftrt/examples-cpp/image_classification/BUILD b/tftrt/examples-cpp/image_classification/frozen-graph/BUILD
similarity index 100%
rename from tftrt/examples-cpp/image_classification/BUILD
rename to tftrt/examples-cpp/image_classification/frozen-graph/BUILD
diff --git a/tftrt/examples-cpp/image_classification/frozen-graph/README.md b/tftrt/examples-cpp/image_classification/frozen-graph/README.md
new file mode 100755
index 000000000..6b3e8ba4f
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/frozen-graph/README.md
@@ -0,0 +1,106 @@
+
+
+# TF-TRT C++ Image Recognition Demo
+
+This example shows how you can load a native TF Keras ResNet-50 model, convert it to a TF-TRT optimized model (via the TF-TRT Python API), save the model as a frozen graph, and finally load and serve the model with the TF C++ API. The workflow is illustrated in the diagram below:
+
+
+![TF-TRT C++ Inference workflow](TF-TRT_CPP_inference.png "TF-TRT C++ Inference")
+
+This example builds on Google's original TensorFlow C++ image classification [example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image), on top of which we added the TF-TRT conversion step and adapted the C++ code to load and run inference with the TF-TRT model.
+
+## Docker environment
+Docker images provide a convenient and repeatable environment for experimentation. This workflow was tested in the NVIDIA NGC TensorFlow 22.01 Docker container, which ships with a TensorFlow 2.x build. The tools required to build this example, such as Bazel and the NVIDIA CUDA, cuDNN, and NCCL libraries, are already set up.
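If you would rather verify the environment than take this on faith, a quick check is to confirm the expected tools are on `PATH`. The helper below is purely illustrative (it is not part of this example), and the tool list is an assumption:

```python
import shutil

def missing_tools(tools):
    """Return the subset of `tools` that cannot be found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Assumed tool list for this workflow; adjust to your setup.
print(missing_tools(["docker", "git", "bazel", "jupyter"]))
```

Run it inside the container; an empty list means everything was found.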
+
+To replicate the steps below, start by pulling the NGC TF container:
+
+```
+docker pull nvcr.io/nvidia/tensorflow:22.01-tf2-py3
+```
+Then start the container with nvidia-docker:
+
+```
+nvidia-docker run --rm -it -p 8888:8888 --name TFTRT_CPP nvcr.io/nvidia/tensorflow:22.01-tf2-py3
+```
+
+You will land at `/workspace` within the Docker container. Clone the TF-TRT example repository with:
+
+```
+git clone https://github.com/tensorflow/tensorrt
+cd tensorrt
+```
+
+Then copy the content of this C++ example directory to the TensorFlow example source directory:
+
+```
+cp -r ./tftrt/examples-cpp/image_classification/ /opt/tensorflow/tensorflow-source/tensorflow/examples/
+cd /opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification
+```
+
+
+## Convert to TF-TRT Model
+
+Start Jupyter lab with:
+
+```
+jupyter lab --ip=0.0.0.0
+```
+
+A Jupyter notebook for downloading the Keras ResNet-50 model and running the TF-TRT conversion is provided in `tftrt-conversion.ipynb` for your experimentation. By default, this notebook produces a TF-TRT FP32 model at `/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb`.
+
+As part of the conversion, the notebook also benchmarks the model and prints throughput statistics.
+
+
+
+
+## Build the C++ example
+The NVIDIA NGC container should already have everything you need to build and run this example.
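The throughput statistics reported by the notebook (and later by the C++ binary) come down to a timed inference loop. A minimal sketch in plain Python, where `infer` is a stand-in for a real model call and the step counts are arbitrary:

```python
import time

def throughput(infer, batch_size, steps=50, warmup=5):
    """Images/sec over `steps` timed calls to `infer`, after `warmup` untimed calls."""
    for _ in range(warmup):
        infer()  # warm up caches, TRT engines, etc. before timing
    start = time.perf_counter()
    for _ in range(steps):
        infer()
    elapsed = time.perf_counter() - start
    return steps * batch_size / elapsed

# Stand-in "model" that just sleeps for a millisecond per batch.
print(throughput(lambda: time.sleep(0.001), batch_size=8))
```

The real benchmarks time actual model calls, but the arithmetic is the same: processed images divided by wall-clock seconds.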
+
+To build it, first copy the build script `tftrt-build.sh` to `/opt/tensorflow`:
+
+```
+cp tftrt-build.sh /opt/tensorflow
+```
+
+Then from `/opt/tensorflow`, run the build command:
+
+```bash
+cd /opt/tensorflow
+bash ./tftrt-build.sh
+```
+
+That should build a binary executable `tftrt_label_image` that you can then run like this:
+
+```bash
+tensorflow-source/bazel-bin/tensorflow/examples/image_classification/tftrt_label_image \
+--graph=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb \
+--image=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/img0.JPG
+```
+
+This uses the default image `img0.JPG`, which was downloaded by the conversion notebook, and should
+output something similar to this:
+
+```
+2022-02-23 13:53:56.076348: I tensorflow/examples/image-classification/main.cc:276] malamute (250): 0.575496
+2022-02-23 13:53:56.076384: I tensorflow/examples/image-classification/main.cc:276] Saint Bernard (248): 0.399285
+2022-02-23 13:53:56.076412: I tensorflow/examples/image-classification/main.cc:276] Eskimo dog (249): 0.0228338
+2022-02-23 13:53:56.076423: I tensorflow/examples/image-classification/main.cc:276] Ibizan hound (174): 0.00127912
+2022-02-23 13:53:56.076449: I tensorflow/examples/image-classification/main.cc:276] Mexican hairless (269): 0.000520922
+```
+
+The program will also benchmark and output the throughput. Observe the improved throughput offered by moving from Python to C++ serving.
+
+Next, try it out on your own images by supplying the --image= argument, e.g.
+
+```bash
+tensorflow-source/bazel-bin/tensorflow/examples/label_image/tftrt_label_image --image=my_image.png
+```
+
+## What's next
+
+Try to build TF-TRT FP16 and INT8 models and test on your own data, and serve them with C++.
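The per-class scores shown earlier are simply the highest-scoring entries of the classifier's output vector paired with their labels. A minimal sketch of that ranking step in plain Python (the labels and scores here are made up for illustration; the real ones come from the ImageNet label file and the ResNet-50 softmax output):

```python
def top_k(probs, labels, k=5):
    """Return the k (label, class_index, score) entries with the highest scores."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [(labels[i], i, probs[i]) for i in ranked[:k]]

# Toy 4-class example standing in for the 1000-class ImageNet output.
labels = ["cat", "dog", "fish", "bird"]
probs = [0.10, 0.70, 0.05, 0.15]
for label, idx, score in top_k(probs, labels, k=3):
    print(f"{label} ({idx}): {score}")
```

The C++ example's `main.cc` does the equivalent ranking over the model's output tensor before printing each `label (index): score` line.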
+ +```bash + +``` diff --git a/tftrt/examples-cpp/image_classification/TF-TRT_CPP_inference.png b/tftrt/examples-cpp/image_classification/frozen-graph/TF-TRT_CPP_inference.png old mode 100644 new mode 100755 similarity index 100% rename from tftrt/examples-cpp/image_classification/TF-TRT_CPP_inference.png rename to tftrt/examples-cpp/image_classification/frozen-graph/TF-TRT_CPP_inference.png diff --git a/tftrt/examples-cpp/image_classification/main.cc b/tftrt/examples-cpp/image_classification/frozen-graph/main.cc similarity index 100% rename from tftrt/examples-cpp/image_classification/main.cc rename to tftrt/examples-cpp/image_classification/frozen-graph/main.cc diff --git a/tftrt/examples-cpp/image_classification/tftrt-build.sh b/tftrt/examples-cpp/image_classification/frozen-graph/tftrt-build.sh old mode 100644 new mode 100755 similarity index 100% rename from tftrt/examples-cpp/image_classification/tftrt-build.sh rename to tftrt/examples-cpp/image_classification/frozen-graph/tftrt-build.sh diff --git a/tftrt/examples-cpp/image_classification/tftrt-conversion.ipynb b/tftrt/examples-cpp/image_classification/frozen-graph/tftrt-conversion.ipynb similarity index 100% rename from tftrt/examples-cpp/image_classification/tftrt-conversion.ipynb rename to tftrt/examples-cpp/image_classification/frozen-graph/tftrt-conversion.ipynb From 54347994a8cf2ba419cafc7dcee0733d69c6db4d Mon Sep 17 00:00:00 2001 From: vinhn Date: Thu, 28 Apr 2022 21:54:20 -0700 Subject: [PATCH 2/4] adding saved model --- .../frozen-graph/README.md | 26 +- .../image_classification/frozen-graph/main.cc | 6 +- .../image_classification/saved-model/BUILD | 50 ++ .../saved-model/README.md | 108 +++ .../TF-TRT_CPP_inference_saved_model.png | Bin 0 -> 26036 bytes .../saved-model/image_classification_build.sh | 37 + .../image_classification/saved-model/main.cc | 464 ++++++++++++ .../saved-model/tftrt-build.sh | 13 + .../saved-model/tftrt-conversion.ipynb | 699 ++++++++++++++++++ 9 files changed, 1388 
insertions(+), 15 deletions(-)
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/BUILD
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/README.md
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/TF-TRT_CPP_inference_saved_model.png
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/image_classification_build.sh
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/main.cc
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/tftrt-build.sh
 create mode 100755 tftrt/examples-cpp/image_classification/saved-model/tftrt-conversion.ipynb

diff --git a/tftrt/examples-cpp/image_classification/frozen-graph/README.md b/tftrt/examples-cpp/image_classification/frozen-graph/README.md
index 6b3e8ba4f..a6b25bc39 100755
--- a/tftrt/examples-cpp/image_classification/frozen-graph/README.md
+++ b/tftrt/examples-cpp/image_classification/frozen-graph/README.md
@@ -34,8 +34,8 @@ cd tensorrt
 Then copy the content of this C++ example directory to the TensorFlow example source directory:
 
 ```
-cp -r ./tftrt/examples-cpp/image_classification/ /opt/tensorflow/tensorflow-source/tensorflow/examples/
-cd /opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification
+cp -r ./tftrt/examples-cpp/image_classification /opt/tensorflow/tensorflow-source/tensorflow/examples/
+cd /opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph
 ```
 
 
@@ -47,7 +47,7 @@ Start Jupyter lab with:
 jupyter lab --ip=0.0.0.0
 ```
 
-A Jupyter notebook for downloading the Keras ResNet-50 model and running the TF-TRT conversion is provided in `tftrt-conversion.ipynb` for your experimentation. By default, this notebook produces a TF-TRT FP32 model at `/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb`.
+A Jupyter notebook for downloading the Keras ResNet-50 model and running the TF-TRT conversion is provided in `tftrt-conversion.ipynb` for your experimentation. By default, this notebook produces a TF-TRT FP32 model at `/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/frozen_models_trt_fp32/frozen_models_trt_fp32.pb`.
 
 As part of the conversion, the notebook also benchmarks the model and prints throughput statistics.
 
@@ -73,20 +73,20 @@ bash ./tftrt-build.sh
 That should build a binary executable `tftrt_label_image` that you can then run like this:
 
 ```bash
-tensorflow-source/bazel-bin/tensorflow/examples/image_classification/tftrt_label_image \
---graph=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb \
---image=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/img0.JPG
+tensorflow-source/bazel-bin/tensorflow/examples/image_classification/frozen-graph/tftrt_label_image \
+--graph=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/frozen_models_trt_fp32/frozen_models_trt_fp32.pb \
+--image=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/data/img0.JPG
 ```
 
 This uses the default image `img0.JPG`, which was downloaded by the conversion notebook, and should
 output something similar to this:
 
 ```
-2022-02-23 13:53:56.076348: I tensorflow/examples/image-classification/main.cc:276] malamute (250): 0.575496
-2022-02-23 13:53:56.076384: I tensorflow/examples/image-classification/main.cc:276] Saint Bernard (248): 0.399285
-2022-02-23 13:53:56.076412: I tensorflow/examples/image-classification/main.cc:276] Eskimo dog (249): 0.0228338
-2022-02-23 13:53:56.076423: I tensorflow/examples/image-classification/main.cc:276] Ibizan hound (174): 0.00127912
-2022-02-23 13:53:56.076449: I tensorflow/examples/image-classification/main.cc:276] Mexican hairless
(269): 0.000520922 +2022-04-29 04:20:24.377345: I tensorflow/examples/image_classification/frozen-graph/main.cc:276] malamute (250): 0.575496 +2022-04-29 04:20:24.377370: I tensorflow/examples/image_classification/frozen-graph/main.cc:276] Saint Bernard (248): 0.399285 +2022-04-29 04:20:24.377380: I tensorflow/examples/image_classification/frozen-graph/main.cc:276] Eskimo dog (249): 0.0228338 +2022-04-29 04:20:24.377387: I tensorflow/examples/image_classification/frozen-graph/main.cc:276] Ibizan hound (174): 0.00127912 +2022-04-29 04:20:24.377394: I tensorflow/examples/image_classification/frozen-graph/main.cc:276] Mexican hairless (269): 0.000520922 ``` The program will also benchmark and output the throughput. Observe the improved throughput offered by moving from Python to C++ serving. @@ -94,7 +94,9 @@ The program will also benchmark and output the throughput. Observe the improved Next, try it out on your own images by supplying the --image= argument, e.g. ```bash -tensorflow-source/bazel-bin/tensorflow/examples/label_image/tftrt_label_image --image=my_image.png +tensorflow-source/bazel-bin/tensorflow/examples/image_classification/frozen-graph/tftrt_label_image \ +--graph=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/frozen_models_trt_fp32/frozen_models_trt_fp32.pb \ +--image=my_image.png ``` ## What's next diff --git a/tftrt/examples-cpp/image_classification/frozen-graph/main.cc b/tftrt/examples-cpp/image_classification/frozen-graph/main.cc index 5dc34da18..5248d143a 100755 --- a/tftrt/examples-cpp/image_classification/frozen-graph/main.cc +++ b/tftrt/examples-cpp/image_classification/frozen-graph/main.cc @@ -302,11 +302,11 @@ int main(int argc, char* argv[]) { // These are the command-line flags the program can understand. // They define where the graph and input data is located, and what kind of // input the model expects. 
- string image = "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/img0.JPG"; + string image = "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/data/img0.JPG"; string graph = - "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/resnet-50.pb"; + "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/data/resnet-50.pb"; string labels = - "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/imagenet_slim_labels.txt"; + "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen-graph/data/imagenet_slim_labels.txt"; int32_t input_width = 224; int32_t input_height = 224; float input_mean = 127; diff --git a/tftrt/examples-cpp/image_classification/saved-model/BUILD b/tftrt/examples-cpp/image_classification/saved-model/BUILD new file mode 100755 index 000000000..2bc49d38d --- /dev/null +++ b/tftrt/examples-cpp/image_classification/saved-model/BUILD @@ -0,0 +1,50 @@ +# Description: +# TensorFlow C++ inference example with TF-TRT model. 
+
+load("//tensorflow:tensorflow.bzl", "tf_cc_binary")
+
+package(
+    default_visibility = ["//tensorflow:internal"],
+    licenses = ["notice"],
+)
+
+tf_cc_binary(
+    name = "tftrt_label_image",
+    srcs = [
+        "main.cc",
+    ],
+    linkopts = select({
+        "//tensorflow:android": [
+            "-pie",
+            "-landroid",
+            "-ljnigraphics",
+            "-llog",
+            "-lm",
+            "-z defs",
+            "-s",
+            "-Wl,--exclude-libs,ALL",
+        ],
+        "//conditions:default": ["-lm"],
+    }),
+    deps = select({
+        "//tensorflow:android": [
+            # cc:cc_ops is used to include image ops (for label_image)
+            # Jpg, gif, and png related code won't be included
+            "//tensorflow/cc:cc_ops",
+            "//tensorflow/core:portable_tensorflow_lib",
+            # cc:android_tensorflow_image_op is for including jpeg/gif/png
+            # decoder to enable real-image evaluation on Android
+            "//tensorflow/core/kernels/image:android_tensorflow_image_op",
+        ],
+        "//conditions:default": [
+            "//tensorflow/cc:cc_ops",
+            "//tensorflow/cc/saved_model:loader",
+            "//tensorflow/core:core_cpu",
+            "//tensorflow/core:framework",
+            "//tensorflow/core:framework_internal",
+            "//tensorflow/core:lib",
+            "//tensorflow/core:protos_all_cc",
+            "//tensorflow/core:tensorflow",
+        ],
+    }),
+)
\ No newline at end of file
diff --git a/tftrt/examples-cpp/image_classification/saved-model/README.md b/tftrt/examples-cpp/image_classification/saved-model/README.md
new file mode 100755
index 000000000..716050b50
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/saved-model/README.md
@@ -0,0 +1,108 @@
+
+
+# TF-TRT C++ Image Recognition Demo
+
+This example shows how you can load a native TF Keras ResNet-50 model, convert it to a TF-TRT optimized model (via the TF-TRT Python API), export it as a SavedModel, and finally load and serve the model with the TF C++ API.
The workflow is illustrated in the diagram below:
+
+
+![TF-TRT C++ Inference workflow](TF-TRT_CPP_inference_saved_model.png "TF-TRT C++ Inference")
+
+This example builds on Google's original TensorFlow C++ image classification [example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image), on top of which we added the TF-TRT conversion step and adapted the C++ code to load and run inference with the TF-TRT model.
+
+## Docker environment
+Docker images provide a convenient and repeatable environment for experimentation. This workflow was tested in the NVIDIA NGC TensorFlow 22.01 Docker container, which ships with a TensorFlow 2.x build. The tools required to build this example, such as Bazel and the NVIDIA CUDA, cuDNN, and NCCL libraries, are already set up.
+
+To replicate the steps below, start by pulling the NGC TF container:
+
+```
+docker pull nvcr.io/nvidia/tensorflow:22.01-tf2-py3
+```
+Then start the container with nvidia-docker:
+
+```
+nvidia-docker run --rm -it -p 8888:8888 --name TFTRT_CPP nvcr.io/nvidia/tensorflow:22.01-tf2-py3
+```
+
+You will land at `/workspace` within the Docker container. Clone the TF-TRT example repository with:
+
+```
+git clone https://github.com/tensorflow/tensorrt
+cd tensorrt
+```
+
+Then copy the content of this C++ example directory to the TensorFlow example source directory:
+
+```
+cp -r ./tftrt/examples-cpp/image_classification /opt/tensorflow/tensorflow-source/tensorflow/examples/
+cd /opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model
+```
+
+
+## Convert to TF-TRT Model
+
+Start Jupyter lab with:
+
+```
+jupyter lab --ip=0.0.0.0
+```
+
+A Jupyter notebook for downloading the Keras ResNet-50 model and running the TF-TRT conversion is provided in `tftrt-conversion.ipynb` for your experimentation.
By default, this notebook produces a TF-TRT FP32 saved model at `/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/resnet50_saved_model_TFTRT_FP32_frozen`.
+
+As part of the conversion, the notebook also benchmarks the model and prints throughput statistics.
+
+
+
+
+## Build the C++ example
+The NVIDIA NGC container should already have everything you need to build and run this example.
+
+To build it, first copy the build script `tftrt-build.sh` to `/opt/tensorflow`:
+
+```
+cp tftrt-build.sh /opt/tensorflow
+```
+
+Then from `/opt/tensorflow`, run the build command:
+
+```bash
+cd /opt/tensorflow
+bash ./tftrt-build.sh
+```
+
+That should build a binary executable `tftrt_label_image` that you can then run like this:
+
+```bash
+tensorflow-source/bazel-bin/tensorflow/examples/image_classification/saved-model/tftrt_label_image \
+--export_dir=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/resnet50_saved_model_TFTRT_FP32_frozen \
+--image=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/data/img0.JPG
+```
+
+This uses the default image `img0.JPG`, which was downloaded by the conversion notebook, and should
+output something similar to this:
+
+```
+2022-04-29 04:19:28.397102: I tensorflow/examples/image_classification/saved-model/main.cc:331] malamute (250): 0.575497
+2022-04-29 04:19:28.397126: I tensorflow/examples/image_classification/saved-model/main.cc:331] Saint Bernard (248): 0.399284
+2022-04-29 04:19:28.397134: I tensorflow/examples/image_classification/saved-model/main.cc:331] Eskimo dog (249): 0.0228338
+2022-04-29 04:19:28.397141: I tensorflow/examples/image_classification/saved-model/main.cc:331] Ibizan hound (174): 0.00127912
+2022-04-29 04:19:28.397147: I tensorflow/examples/image_classification/saved-model/main.cc:331] Mexican hairless (269): 0.000520922
+```
+
+The program will also benchmark and
output the throughput. Observe the improved throughput offered by moving from Python to C++ serving. + +Next, try it out on your own images by supplying the --image= argument, e.g. + +```bash +tensorflow-source/bazel-bin/tensorflow/examples/image_classification/saved-model/tftrt_label_image \ +--export_dir=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/resnet50_saved_model_TFTRT_FP32_frozen \ +--image=my_image.png +``` + +## What's next + +Try to build TF-TRT FP16 and INT8 models and test on your own data, and serve them with C++. + +```bash + +``` diff --git a/tftrt/examples-cpp/image_classification/saved-model/TF-TRT_CPP_inference_saved_model.png b/tftrt/examples-cpp/image_classification/saved-model/TF-TRT_CPP_inference_saved_model.png new file mode 100755 index 0000000000000000000000000000000000000000..153881ad3b94b621c2fc1d79e03f65444eb3aaf6 GIT binary patch literal 26036 zcmeFYXIPWj+6Ia`h)A=7h=^FgktQG@(nc6%5P=yTh!7wG0tzIQ&7D|~#vaDdO_JGKLVXB<*h1pe6qzhZ32SI~NB0{Gz<*YjrQ`S^-r zh1MMSfu9Av{;-Dg@kw@X{@HS%y^+bs=WJr~`}w~E?U@;<{?e{mRGgd4W_qfQ7?-YH~oVNE~yv>ho{kMw+zHkbI z4o7VDzT*CUTmRJ%z!%0A|If(ZUFQFc{3CY$&y)X^ME);b{41OO|2thAR6!C}Cv)kZ zJi2G|XsiK@RWYRur$F(35iMz2X=Z7fDPYfPZ0El!VWJwh97@qF30a1*M`bpWc~^!1 zRTKA3FjP<+Q+7Oh?YN}S#&|66Le>?9R zef*@pw<^@P5G8RsDbp*Azk>uTzs=blyvIDJ1|8=PwBDg-UCv#QOEQNiLw89 zOo-%tY;|LZh5kpLQcxgsrtyBfTyl9f`noa%d7m@wYT!;-uAa#vjtp9UIoY#0{d+k; zS$VYIAl`A}4knpUVsYu*pd?#V9d2+%Rmi|zH3?DSsLz{GpuetHpqFe2@-;RpSy(pD z@Vx63iPetED>v#ELQ5UZSKf`#-#7s_(3v|6r!do=jOb+$b2{*u+)uAzpBcxwNuST zym3UXe|Q;BJWpP4lcY~UHiCVeVfd%`jl436qy#N0qGJ;qHJneB?77&}-K+Xk@CztP z<#5(#P}Dq5=S;FmynOQgcrjI1dwW$Pdn_hYMY@OYf5#Dv5^Bd9u1MO}2{aZ#Ke+QUN zfwSzU2++zk3UeKEs>w~E;~*AFT+{o+~g0uOw6@QVRchZv3}{oeN5!2_1n1@_gBea`K?mzMKOn@H^A zap%HoDBgO09ls&d;|Jjf4$WXh?>= zMmG-1@$GV*`weTA;Xe@aV*L*V8gmn8KCCS0FbsGNMp^Ry3Vx~;eJ1-{O_ zrj&_X?Lc4T>YjLL`~#yqwUDC@2cMJ-X)ROK*WAJvIB+PVfS9~U7=5|k@W;1BvFANX zfkmB{*~QKh@)<|=if{WZw`z27KYOa);{H<5r3100-$DAvCur{SjD=g*sKujL6Fxp` 
z{YBTVXETvfJr}qMBP0q6z~l!^!>&ob0~TCjD?oCFmWGwxWBg+H_*AzO{!&UWu&&na zpUL^zsHRABB>6a7fhaB$%g2}MT}OATUL_ri{hgcW@@d|evV##1A`3?S>EaL#^Rjn z4&a+Z0WNh_s8&Hbibvj%WV#oB+iS%6JfNdWR9ofbq;)5cnxAV1d@#^>2VdK%P*VlE zUdIrWx8%8@*)Kr9Gxx{$jw`EYeT@wS_VMAg4e(r%0N*pqf{m9Cg6b<{PRp#18%wRG ze%pAp{=Ec|wazP=29V&16L8vJQQnkoe5B-aqfq)sDbI&XcY=O0#Dt%L!BSCd%5t-= z+BrHOUulzZHx@~OeF?GOFk+6*+YGQzTgYUOwWldv5;JRjeCKLf0_F<>EY1js9e%hJ zmhnFD)%SkzQxlRGyD?|b{Mc9;)(Y7#19Z#%W6SlYUoA3LA-aMK1H zP7{Rp>5vvX1ISX*0Z5szXn)v&@P2zz};O&@*0dF>NClRd)6EIf%O7>ZNJKAUR7nAqfn(X)B8Ri z``*hY+Ap!g>aX5ZOSFYU5P4hpNVNmer|5Zb z^V}(wlCSQtFnPqvYWAwB^_p+|!-^O8`L+0-fiLUxc9bAE^EV;AfA~b5kCOE!wuGQ}h8g zIp#kOd)RN`u}jVTO7B*_XP-MP{x%}$u*zOOpYi>t$4IVKF>#dU=Bu$WU$&;dQuIK3 zDlGIrwq~(xjF&QUbQ0pby{Csd5ImJH5}JL?a|7U6o=FH4sfD3!8i(a&W%%~-%R+*_ z<{q+wB)eXwioPQ~qT) z6OsVmh4Unk{dZccdi?t{Xz!YS|L#&X02tsqxzms^{~aGqT=@QsqXPDF!QQ(R)PZ;t z2Kc5ezx}mqK0)FeSq8Y-w`WLb&#JZmaDn(ZF}??P<1kgx zDSi$6e!Q_C*){t6GcJf$U0(PWn38MXj;EmVjhIWcU-{f7c zmFl}|4l_g56Mgx+FIY_-{dzw?f4yV9`nSys8eUENOOONO4$K^6M-#V=2kqco#FPGq zU-))nu_N-~h?n3SyZ`HD2*m9*B(y6P@Axl%GVVrV*dH1lfD3Z2PY)yS6WAFuzmpfu zgQoKF&1s&5l(jwITY!BL;6EZZZ}Zu~`HbB_+R74McUAbQ0EBrttL!{(0R9OBocMaJ z?-#XLhRGgZGJ%z2lE69fJ@YjO-mmXr4pr*}7X&=u^7|0Oos+)a%7gcI4|L(mK=k;^ z$jtxxcOc;|t#-=sUaG z^}|88(8A+9eWMk>$If^DGxvvU{`*CNzVow-KjNBCTb3p@cr^NZynLsx-~Nc`Z6rZi z>bSV|w?r`ZgV_8L%QC;2|HET2-+9u>A5qNbV@T4fxFG&5!7PtG_+;K&6;#q!dh4p1 z+|!@hy-z~OApj3_q8-1P4e%l0myH2CjBYa|u$ms_Ff!oh`1U>I4!-1d?)e`7n=Yph znBsuUu6aKQ?D8NNO*km+x#gKo8CBa*U+?+%P~SZ9F(4S`w#Y)30rUQ;8;%&1{T|I< zOa>hFwEhv=6ajelgQz%+&Nm-huqEL=ov`gmP^xt35PW!U%;nBt!IhL7;#BRML-#zk zRW~V6xbf^~#oi$coTJ`MARo#Y@80z_PBwRSg0}6XIB<$xs}pO$=i5iU)?1Q=IHJ0U zoD>JV_ZI2{uK%vtN9Tdu`dn?F_25oU0imM5r?jaHDFeYI_dRB$70sG1&QKo?FpFpQ zhUhna*sI{e+P}YK`uXZIaYTKNY3z7^@z{XazWOKd72&@(jlr`0`LWE4BzzmIhgqrQ zp)>)ZTG`#2x-WHiImWwd-WLoug5A@VsU^D1ziozF^e^>Qz**@JQsQt9?|r!_3cX85 zje=`(!hNhRM^Y1V_N%YoePHbyQYZYu{;DD3krrijUkG9EcAQo7ntBu><>p59xnk+3 z63gX}a^hy*^(d=deX`f*Z8?lvFs|9g1roWD^GP5}*c=m|H%oJkPB|doP1N*|l7-Bi 
zCuN7lVYohwQ?fF@6#>VHYO`R>1n%JIhoI^%o8uv?*JvC?YC&?zko6$4J*;L_hgsAj z89ZaML@l_O@e~FwGT<^px|q2a3hXUa_jatGb6cEUcwR5pyWE6qx4TG_MXw9H1c~Lz z+*biDn2!ru4luM_3<=L~d&XEVpWQ;e78!>*bjs+l-kWl=(jUQU(R-qT&}auGtb4j@~>hp&5ZN5-#1prw5R z?pM1F+7`se*8`f(an@BzQpGM5&4%>S*()W^UQt2Su*zV26bn}3JeIz&G?n90)3~K% z#@Uci{6-ka2$20jBt~IGhvWp#b#QNx{_QxikJ|v-B~2M`hF`>&@H>a9HzncebYHZR zEr(%*ulUW&xUSiPLY;PdcOe^uHW(HAOs>BMcj{M`7!nDnoQdDxfgb8obKuP4rMWt` z{ROCXs-RNfnL~C$oN&greX(5O5M3I@b%`3}uW>eYz!f)rSbJE?=4q`OJZ8dxNM1ne z=B#dvTi({EO=&3#gDdwU zSI3I6PPsI`*iy2QaGm>f8zt0hYHQasXuOIV7$!?o2y6TksJ509mq@PGK30B|#}5RY zAFIw@ud@yGuDhI;itFN~LpM-e*AZNDe|=J~<17EPIvpkv+FrJBhlCS{w8`1`^;9g+ z<56YhJ0$uIOIQ@s4QtVW3onU!C)j4Cqmh*!MQ-bJ_{g&5Wa3ACSTxD2GW^*%m>s8N z-*Hs)R?=WB6^4a6osE4+Wu*SmK539P9@P6ffTs&*nETk)q$_nq1`S@j^Ce{W{3B!q zIRl6Qc*ioPl3y#DsBs1ifnu@ua=ks)KPmFDnXqzJDR&vqkKZE zszGzF>=tQFQS4h)1&39DbEKA%_TWZsz}a%p!scS8wRCq44r%9xS9*2$QS;*20oAkp zZ^uk;yE}l9n*C0465_k*J;9 zoEe$>**`^q+3>V<%Ct|RV<~k#Al)vMNyECJW(TA!@-Ox4k2NaM-?`Vg+e_`7rVpx~ zU069?O2nA=SM^NWPS~qAcpy%F#xDA2hO{(RhOIL-EFh53jbA}9HI?uo=>doB2CCVs z17<0?+tNbKmjeA?V@!_Fq%0Gj11Kx^Q$n0da+1@lAdA0csi|ZUk!KCRkl1$%$xrkM z8fi>+3bR(*@-?JmMI=w?}j|L)Mzs zl*ll&|ArsCp?5>8dHFM0s&cTQKyN~|yk~`c-2}B79E~=X8(>(bcvh!HATk0$I^=1% ziNik5Lps%dBem!kpe~NJ${On5c9f+}i9ScSQoZ-iTdz4DEP=%s%*=wiH5AL*xAOdl ztfpGGfZ50EZ%t+Go}y}=3?cRSFeK#>s?*d{wB%~1qgY}kG}Yc~8!IleR8FZac@Cps5rSUO zsrER}s6Ko-#y=}bDANtprBTOkJ~2fUN0+D|O(*{(N!ZPf8iSktSdFFH+x%qhAAw{} zPsR82Kt||Ph5kf_Rthp8VMGJmd}kWIyMG821x1eDPiWwNfb|XHiw@H5v<@Id^;gnL z8mFUe^Ih$+LvqBM4uuuuX=}(}&u6Y9I-cm*24I-CKtZ@b0$n&bA4(DR@~`Af@J(&6 zdc1R&3CUF;NakEdf@E@!#~%XMb>4Inm?DOxa@;*(BW=7|vdRH--ay-EjLmuX6qo3> zS#a5%kjVaB+tZM($u4;4LiSaIaD0NTd-M}psO^?vq2p1Z#9YK*Ppaot&U!w{be*Q z=_bJX>_7|BiWKf8E82&8m}}j-tigDc61;HdIq^a<2;y%OD;gjy)UmuXY$C>uvuC(} zAYPXGw4rO)$#OrqQ}ER5_2uE%&luA;{Gg_mju!fxkO^4H(LPWhf0`IaE=7qdf`g_! 
literal 0
HcmV?d00001

diff --git a/tftrt/examples-cpp/image_classification/saved-model/image_classification_build.sh b/tftrt/examples-cpp/image_classification/saved-model/image_classification_build.sh
new file mode 100755
index 000000000..38477247c
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/saved-model/image_classification_build.sh
@@ -0,0 +1,37 @@
+#!/bin/bash
+# Build the C++ TF-TRT example
+
+# Copyright 2019 NVIDIA Corporation. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+
+set -e
+if [[ ! -f /opt/tensorflow/nvbuild.sh || ! -f /opt/tensorflow/nvbuildopts ]]; then
+  echo 'This TF-TRT example is intended to be executed in the NGC TensorFlow container environment. Get one with, e.g., "docker pull nvcr.io/nvidia/tensorflow:19.10-py3".'
+  exit 1
+fi
+
+# TODO: programmatically determine the Python and TF API versions
+PYVER=3.6 # TODO: get this by parsing `python --version`
+TFAPI=1   # TODO: get this by parsing tf.__version__
+
+/opt/tensorflow/nvbuild.sh --configonly --python$PYVER --v$TFAPI
+
+BUILD_OPTS="$(cat /opt/tensorflow/nvbuildopts)"
+if [[ "$TFAPI" == "2" ]]; then
+  BUILD_OPTS="--config=v2 $BUILD_OPTS"
+fi
+
+cd /opt/tensorflow/tensorflow-source
+bazel build $BUILD_OPTS tensorflow/examples/image_classification/...
diff --git a/tftrt/examples-cpp/image_classification/saved-model/main.cc b/tftrt/examples-cpp/image_classification/saved-model/main.cc
new file mode 100755
index 000000000..a21e2f330
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/saved-model/main.cc
@@ -0,0 +1,464 @@
+/* Copyright 2021 NVIDIA Corporation. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+// A minimal but useful C++ example showing how to load a TF-TRT ResNet-50
+// model, prepare input images for it, run them through the graph, and
+// interpret the results.
+//
+// It's designed to have as few dependencies and be as clear as possible, so
+// it's more verbose than it could be in production code. In particular, using
+// auto for the types of a lot of the returned values from TensorFlow calls can
+// remove a lot of boilerplate, but I find the explicit types useful in sample
+// code to make it simple to look up the classes involved.
+//
+// To use it, compile and then run it in a working directory with the example's
+// data/ folder below it, and you should see the top five labels for the
+// example image printed. You can then customize it to use your own models or
+// images by changing the file names at the top of the main() function.
+//
+// Note that, for GIF inputs, to reuse existing code, only single-frame ones
+// are supported.
+
+#include <algorithm>
+#include <cstdio>
+#include <fstream>
+#include <iostream>
+#include <memory>
+#include <string>
+#include <utility>
+#include <vector>
+
+#include "tensorflow/cc/ops/const_op.h"
+#include "tensorflow/cc/ops/array_ops.h"
+#include "tensorflow/cc/ops/image_ops.h"
+#include "tensorflow/cc/ops/standard_ops.h"
+#include "tensorflow/core/framework/graph.pb.h"
+#include "tensorflow/core/framework/tensor.h"
+#include "tensorflow/core/graph/default_device.h"
+#include "tensorflow/core/graph/graph_def_builder.h"
+#include "tensorflow/core/lib/core/errors.h"
+#include "tensorflow/core/lib/core/stringpiece.h"
+#include "tensorflow/core/lib/core/threadpool.h"
+#include "tensorflow/core/lib/io/path.h"
+#include "tensorflow/core/lib/strings/str_util.h"
+#include "tensorflow/core/lib/strings/stringprintf.h"
+#include "tensorflow/core/platform/env.h"
+#include "tensorflow/core/platform/init_main.h"
+#include "tensorflow/core/platform/logging.h"
+#include "tensorflow/core/platform/types.h"
+#include "tensorflow/core/public/session.h"
+#include "tensorflow/core/util/command_line_flags.h"
+#include "tensorflow/cc/saved_model/loader.h"
+#include "tensorflow/core/framework/tensor.pb.h"
+#include "tensorflow/core/lib/core/status.h"
+
+#include "absl/strings/string_view.h"
+
+// These are all common classes it's handy to reference with no namespace.
+using tensorflow::Flag;
+using tensorflow::int32;
+using tensorflow::Status;
+using tensorflow::string;
+using tensorflow::Tensor;
+using tensorflow::tstring;
+
+// Returns the names of the nodes listed in the signature definition.
+std::vector<std::string>
+GetNodeNames(const google::protobuf::Map<std::string, tensorflow::TensorInfo>
+                 &signature) {
+  std::vector<std::string> names;
+  for (auto const &item : signature) {
+    absl::string_view name = item.second.name();
+    // Remove the tensor suffix like ":0".
+    size_t last_colon = name.find_last_of(':');
+    if (last_colon != absl::string_view::npos) {
+      name.remove_suffix(name.size() - last_colon);
+    }
+    names.push_back(std::string(name));
+  }
+  return names;
+}
+
+// Loads a SavedModel from export_dir into the SavedModelBundle.
+tensorflow::Status LoadModel(const std::string &export_dir,
+                             tensorflow::SavedModelBundle *bundle,
+                             std::vector<std::string> *input_names,
+                             std::vector<std::string> *output_names) {
+
+  tensorflow::RunOptions run_options;
+  TF_RETURN_IF_ERROR(tensorflow::LoadSavedModel(tensorflow::SessionOptions(),
+                                                run_options, export_dir,
+                                                {"serve"}, bundle));
+
+  // Print the signature defs.
+  auto signature_map = bundle->GetSignatures();
+  for (const auto &name_and_signature_def : signature_map) {
+    const auto &name = name_and_signature_def.first;
+    const auto &signature_def = name_and_signature_def.second;
+    std::cerr << "Name: " << name << std::endl;
+    std::cerr << "SignatureDef: " << signature_def.DebugString() << std::endl;
+  }
+
+  // Extract the input and output tensor names from the signature def.
+  const tensorflow::SignatureDef &signature = signature_map["serving_default"];
+  *input_names = GetNodeNames(signature.inputs());
+  *output_names = GetNodeNames(signature.outputs());
+
+  return tensorflow::Status::OK();
+}
+
+// Takes a file name, and loads a list of labels from it, one per line, and
+// returns a vector of the strings. It pads with empty strings so the length
+// of the result is a multiple of 16, because our model expects that.
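As a sanity check on that padding requirement, the same logic can be sketched in Python (the function name and sample labels are illustrative only, not part of the patch):

```python
# Pure-Python sketch of ReadLabelsFile's behavior: the label list is padded
# with empty strings until its length is a multiple of 16, while the original
# (unpadded) count is reported separately.
def read_labels(lines, padding=16):
    labels = list(lines)
    found_label_count = len(labels)
    while len(labels) % padding:
        labels.append("")  # pad with empty strings, as result->emplace_back() does
    return labels, found_label_count

labels, found = read_labels(["tench", "goldfish", "great white shark"])
# len(labels) is now 16; found is still 3
```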
+Status ReadLabelsFile(const string& file_name, std::vector<string>* result,
+                      size_t* found_label_count) {
+  std::ifstream file(file_name);
+  if (!file) {
+    return tensorflow::errors::NotFound("Labels file ", file_name,
+                                        " not found.");
+  }
+  result->clear();
+  string line;
+  while (std::getline(file, line)) {
+    result->push_back(line);
+  }
+  *found_label_count = result->size();
+  const int padding = 16;
+  while (result->size() % padding) {
+    result->emplace_back();
+  }
+  return Status::OK();
+}
+
+static Status ReadEntireFile(tensorflow::Env* env, const string& filename,
+                             Tensor* output) {
+  tensorflow::uint64 file_size = 0;
+  TF_RETURN_IF_ERROR(env->GetFileSize(filename, &file_size));
+
+  string contents;
+  contents.resize(file_size);
+
+  std::unique_ptr<tensorflow::RandomAccessFile> file;
+  TF_RETURN_IF_ERROR(env->NewRandomAccessFile(filename, &file));
+
+  tensorflow::StringPiece data;
+  TF_RETURN_IF_ERROR(file->Read(0, file_size, &data, &(contents)[0]));
+  if (data.size() != file_size) {
+    return tensorflow::errors::DataLoss("Truncated read of '", filename,
+                                        "' expected ", file_size, " got ",
+                                        data.size());
+  }
+  output->scalar<tstring>()() = tstring(data);
+  return Status::OK();
+}
+
+// Given an image file name, read in the data, try to decode it as an image,
+// resize it to the requested size, and then scale the values as desired.
+Status ReadTensorFromImageFile(const string& file_name, const int input_height,
+                               const int input_width, const float input_mean,
+                               const float input_std,
+                               std::vector<Tensor>* out_tensors) {
+  auto root = tensorflow::Scope::NewRootScope();
+  using namespace ::tensorflow::ops;  // NOLINT(build/namespaces)
+
+  string input_name = "file_reader";
+  string output_name = "normalized";
+
+  // Read file_name into a tensor named input.
+  Tensor input(tensorflow::DT_STRING, tensorflow::TensorShape());
+  TF_RETURN_IF_ERROR(
+      ReadEntireFile(tensorflow::Env::Default(), file_name, &input));
+
+  // Use a placeholder to read the input data.
+  auto file_reader =
+      Placeholder(root.WithOpName("input"), tensorflow::DataType::DT_STRING);
+
+  std::vector<std::pair<string, Tensor>> inputs = {
+      {"input", input},
+  };
+
+  // Now try to figure out what kind of file it is and decode it.
+  const int wanted_channels = 3;
+  tensorflow::Output image_reader;
+  if (tensorflow::str_util::EndsWith(file_name, ".png")) {
+    image_reader = DecodePng(root.WithOpName("png_reader"), file_reader,
+                             DecodePng::Channels(wanted_channels));
+  } else if (tensorflow::str_util::EndsWith(file_name, ".gif")) {
+    // The GIF decoder returns a 4-D tensor; remove the first dim.
+    image_reader =
+        Squeeze(root.WithOpName("squeeze_first_dim"),
+                DecodeGif(root.WithOpName("gif_reader"), file_reader));
+  } else if (tensorflow::str_util::EndsWith(file_name, ".bmp")) {
+    image_reader = DecodeBmp(root.WithOpName("bmp_reader"), file_reader);
+  } else {
+    // Assume if it's neither a PNG, GIF, nor BMP then it must be a JPEG.
+    image_reader = DecodeJpeg(root.WithOpName("jpeg_reader"), file_reader,
+                              DecodeJpeg::Channels(wanted_channels));
+  }
+  // Now cast the image data to float so we can do normal math on it.
+  auto float_caster =
+      Cast(root.WithOpName("float_caster"), image_reader, tensorflow::DT_FLOAT);
+  // The convention for image ops in TensorFlow is that all images are expected
+  // to be in batches, so that they're four-dimensional arrays with indices of
+  // [batch, height, width, channel]. Because we only have a single image, we
+  // have to add a batch dimension of 1 to the start with ExpandDims().
+  auto dims_expander = ExpandDims(root, float_caster, 0);
+  // Bilinearly resize the image to fit the required dimensions.
+  auto resized = ResizeBilinear(
+      root, dims_expander,
+      Const(root.WithOpName("size"), {input_height, input_width}));
+
+  // Preprocess the image in "caffe" style: https://github.com/keras-team/keras/blob/d8fcb9d4d4dad45080ecfdd575483653028f8eda/keras/applications/imagenet_utils.py#L206
+  // Convert the channel order from RGB to BGR.
+  auto unstack_image_node =
+      tensorflow::ops::Unstack(root.WithOpName("unstack_image"), resized, 3,
+                               tensorflow::ops::Unstack::Attrs().Axis(3));
+  auto stacked_image_node = tensorflow::ops::Stack(
+      root.WithOpName("stacked_image"),
+      {unstack_image_node[2], unstack_image_node[1], unstack_image_node[0]},
+      tensorflow::ops::Stack::Attrs().Axis(3));
+
+  // Subtract the per-channel (BGR) mean.
+  std::vector<float> vec = {103.939, 116.779, 123.68};
+  Tensor img_mean(tensorflow::DT_FLOAT, {3});
+  std::copy_n(vec.begin(), vec.size(), img_mean.flat<float>().data());
+
+  Div(root.WithOpName(output_name), Sub(root, stacked_image_node, img_mean),
+      {input_std});
+
+  // This runs the GraphDef network definition that we've just constructed, and
+  // returns the results in the output tensor.
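The Unstack/Stack plus Sub/Div sequence above implements Keras's "caffe" preprocessing: flip RGB to BGR, then subtract the per-channel mean. As a cross-check of the arithmetic only, here is the same transform in NumPy (a sketch, not part of the patch):

```python
import numpy as np

# "Caffe-style" preprocessing: flip RGB -> BGR, subtract the BGR channel means,
# then divide by input_std (1.0 in this example, so effectively a no-op).
BGR_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(rgb_batch, input_std=1.0):
    # rgb_batch: float32 array of shape [batch, height, width, 3], RGB order.
    bgr = rgb_batch[..., ::-1]  # reverse the channel axis: RGB -> BGR
    return (bgr - BGR_MEAN) / input_std

pixel = np.array([[[[10.0, 20.0, 30.0]]]], dtype=np.float32)  # one RGB pixel
out = preprocess(pixel)
# out[0, 0, 0] is [30 - 103.939, 20 - 116.779, 10 - 123.68]
```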
+  tensorflow::GraphDef graph;
+  TF_RETURN_IF_ERROR(root.ToGraphDef(&graph));
+
+  std::unique_ptr<tensorflow::Session> session(
+      tensorflow::NewSession(tensorflow::SessionOptions()));
+  TF_RETURN_IF_ERROR(session->Create(graph));
+  TF_RETURN_IF_ERROR(session->Run({inputs}, {output_name}, {}, out_tensors));
+  return Status::OK();
+}
+
+// Reads a model graph definition from disk, and creates a session object you
+// can use to run it.
+Status LoadGraph(const string& graph_file_name,
+                 std::unique_ptr<tensorflow::Session>* session) {
+  tensorflow::GraphDef graph_def;
+  Status load_graph_status =
+      ReadBinaryProto(tensorflow::Env::Default(), graph_file_name, &graph_def);
+  if (!load_graph_status.ok()) {
+    return tensorflow::errors::NotFound("Failed to load compute graph at '",
+                                        graph_file_name, "'");
+  }
+  session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
+  Status session_create_status = (*session)->Create(graph_def);
+  if (!session_create_status.ok()) {
+    return session_create_status;
+  }
+  return Status::OK();
+}
+
+// Analyzes the output of the graph to retrieve the highest scores and
+// their positions in the tensor, which correspond to categories.
+Status GetTopLabels(const std::vector<Tensor>& outputs, int how_many_labels,
+                    Tensor* indices, Tensor* scores) {
+  auto root = tensorflow::Scope::NewRootScope();
+  using namespace ::tensorflow::ops;  // NOLINT(build/namespaces)
+
+  string output_name = "top_k";
+  TopK(root.WithOpName(output_name), outputs[0], how_many_labels);
+  // This runs the GraphDef network definition that we've just constructed, and
+  // returns the results in the output tensors.
+  tensorflow::GraphDef graph;
+  TF_RETURN_IF_ERROR(root.ToGraphDef(&graph));
+
+  std::unique_ptr<tensorflow::Session> session(
+      tensorflow::NewSession(tensorflow::SessionOptions()));
+  TF_RETURN_IF_ERROR(session->Create(graph));
+  // The TopK node returns two outputs, the scores and their original indices,
+  // so we have to append :0 and :1 to specify them both.
+  std::vector<Tensor> out_tensors;
+  TF_RETURN_IF_ERROR(session->Run({}, {output_name + ":0", output_name + ":1"},
+                                  {}, &out_tensors));
+  *scores = out_tensors[0];
+  *indices = out_tensors[1];
+  return Status::OK();
+}
+
+// Given the output of a model run, and the name of a file containing the
+// labels, this prints out the top five highest-scoring values.
+Status PrintTopLabels(const std::vector<Tensor>& outputs,
+                      const string& labels_file_name) {
+  std::vector<string> labels;
+  size_t label_count;
+  Status read_labels_status =
+      ReadLabelsFile(labels_file_name, &labels, &label_count);
+  if (!read_labels_status.ok()) {
+    LOG(ERROR) << read_labels_status;
+    return read_labels_status;
+  }
+  const int how_many_labels = std::min(5, static_cast<int>(label_count));
+  Tensor indices;
+  Tensor scores;
+  TF_RETURN_IF_ERROR(GetTopLabels(outputs, how_many_labels, &indices, &scores));
+  tensorflow::TTypes<float>::Flat scores_flat = scores.flat<float>();
+  tensorflow::TTypes<int32>::Flat indices_flat = indices.flat<int32>();
+  for (int pos = 0; pos < how_many_labels; ++pos) {
+    const int label_index = indices_flat(pos);
+    const float score = scores_flat(pos);
+    LOG(INFO) << labels[label_index] << " (" << label_index << "): " << score;
+  }
+  return Status::OK();
+}
+
+// This is a testing function that returns whether the top label index is the
+// one that's expected.
+Status CheckTopLabel(const std::vector<Tensor>& outputs, int expected,
+                     bool* is_expected) {
+  *is_expected = false;
+  Tensor indices;
+  Tensor scores;
+  const int how_many_labels = 1;
+  TF_RETURN_IF_ERROR(GetTopLabels(outputs, how_many_labels, &indices, &scores));
+  tensorflow::TTypes<int32>::Flat indices_flat = indices.flat<int32>();
+  if (indices_flat(0) != expected) {
+    LOG(ERROR) << "Expected label #" << expected << " but got #"
+               << indices_flat(0);
+    *is_expected = false;
+  } else {
+    *is_expected = true;
+  }
+  return Status::OK();
+}
+
+int main(int argc, char* argv[]) {
+  // These are the command-line flags the program can understand.
+ // They define where the graph and input data is located, and what kind of + // input the model expects. + string image = "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/data/img0.JPG"; + string export_dir = + "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/resnet50_saved_model_TFTRT_FP32_frozen"; + string labels = + "/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/saved-model/data/imagenet_slim_labels.txt"; + int32_t input_width = 224; + int32_t input_height = 224; + float input_mean = 127; + float input_std = 1; + bool self_test = false; + string root_dir = ""; + + std::vector flag_list = { + Flag("image", &image, "image to be processed"), + Flag("export_dir", &export_dir, "frozen TF-TRT saved model to be executed"), + Flag("labels", &labels, "name of file containing labels"), + Flag("input_width", &input_width, "resize image to this width in pixels"), + Flag("input_height", &input_height, + "resize image to this height in pixels"), + Flag("input_mean", &input_mean, "scale pixel values to this mean"), + Flag("input_std", &input_std, "scale pixel values to this std deviation"), + Flag("self_test", &self_test, "run a self test"), + Flag("root_dir", &root_dir, + "interpret image and graph file names relative to this directory"), + }; + string usage = tensorflow::Flags::Usage(argv[0], flag_list); + const bool parse_result = tensorflow::Flags::Parse(&argc, argv, flag_list); + if (!parse_result) { + LOG(ERROR) << usage; + return -1; + } + + // We need to call this to set up global state for TensorFlow. + tensorflow::port::InitMain(argv[0], &argc, &argv); + if (argc > 1) { + LOG(ERROR) << "Unknown argument " << argv[1] << "\n" << usage; + return -1; + } + + tensorflow::SavedModelBundle bundle; + std::vector input_names; + std::vector output_names; + + // Load the saved model from the provided path. 
+  Status load_graph_status =
+      LoadModel(export_dir, &bundle, &input_names, &output_names);
+  if (!load_graph_status.ok()) {
+    LOG(ERROR) << load_graph_status;
+    return -1;
+  }
+
+  auto sig_map = bundle.GetSignatures();
+  auto model_def = sig_map.at("serving_default");
+
+  printf("Model Signature\n");
+  for (auto const& p : sig_map) {
+    printf("key: %s\n", p.first.c_str());
+  }
+
+  printf("Model Input Nodes\n");
+  for (auto const& p : model_def.inputs()) {
+    printf("key: %s value: %s\n", p.first.c_str(), p.second.name().c_str());
+  }
+
+  printf("Model Output Nodes\n");
+  for (auto const& p : model_def.outputs()) {
+    printf("key: %s value: %s\n", p.first.c_str(), p.second.name().c_str());
+  }
+
+  auto input_name = model_def.inputs().at("input_2").name();
+  auto output_name = model_def.outputs().at("output_0").name();
+
+  // Get the image from disk as a float array of numbers, resized and
+  // normalized to the specifications the main graph expects.
+  std::vector<Tensor> resized_tensors;
+  string image_path = tensorflow::io::JoinPath(root_dir, image);
+  Status read_tensor_status =
+      ReadTensorFromImageFile(image_path, input_height, input_width, input_mean,
+                              input_std, &resized_tensors);
+  if (!read_tensor_status.ok()) {
+    LOG(ERROR) << read_tensor_status;
+    return -1;
+  }
+  const Tensor& resized_tensor = resized_tensors[0];
+
+  // Actually run the image through the model.
+  std::vector<Tensor> outputs;
+
+  // Fill the input tensors with data.
+  tensorflow::Status status;
+  status = bundle.session->Run({{input_name, resized_tensor}},
+                               {output_name}, {}, &outputs);
+  if (!status.ok()) {
+    std::cerr << "Inference failed: " << status;
+    return -1;
+  }
+
+  // Do something interesting with the results we've generated.
+  Status print_status = PrintTopLabels(outputs, labels);
+  if (!print_status.ok()) {
+    LOG(ERROR) << "Running print failed: " << print_status;
+    return -1;
+  }
+
+  return 0;
+}
diff --git a/tftrt/examples-cpp/image_classification/saved-model/tftrt-build.sh b/tftrt/examples-cpp/image_classification/saved-model/tftrt-build.sh
new file mode 100755
index 000000000..2d4604aa3
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/saved-model/tftrt-build.sh
@@ -0,0 +1,13 @@
+# TODO: programmatically determine the Python and TF API versions
+PYVER=3.8 # TODO: get this by parsing `python --version`
+TFAPI=2   # TODO: get this by parsing tf.__version__
+
+/opt/tensorflow/nvbuild.sh --configonly --python$PYVER --v$TFAPI
+
+BUILD_OPTS="$(cat /opt/tensorflow/nvbuildopts)"
+if [[ "$TFAPI" == "2" ]]; then
+  BUILD_OPTS="--config=v2 $BUILD_OPTS"
+fi
+
+cd tensorflow-source
+bazel build $BUILD_OPTS tensorflow/examples/image_classification/...
diff --git a/tftrt/examples-cpp/image_classification/saved-model/tftrt-conversion.ipynb b/tftrt/examples-cpp/image_classification/saved-model/tftrt-conversion.ipynb
new file mode 100755
index 000000000..4c3eb5f28
--- /dev/null
+++ b/tftrt/examples-cpp/image_classification/saved-model/tftrt-conversion.ipynb
@@ -0,0 +1,699 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "colab": {},
+    "colab_type": "code",
+    "id": "dR1W9kv7IPhE"
+   },
+   "outputs": [],
+   "source": [
+    "# Copyright 2021 NVIDIA Corporation.
All Rights Reserved.\n",
+    "\n",
+    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+    "# you may not use this file except in compliance with the License.\n",
+    "# You may obtain a copy of the License at\n",
+    "\n",
+    "# http://www.apache.org/licenses/LICENSE-2.0\n",
+    "\n",
+    "# Unless required by applicable law or agreed to in writing, software\n",
+    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+    "# See the License for the specific language governing permissions and\n",
+    "# limitations under the License.\n",
+    "# =============================================================================="
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "colab_type": "text",
+    "id": "Yb3TdMZAkVNq"
+   },
+   "source": [
+    "\n",
+    "\n",
+    "# TensorFlow C++ Inference with TF-TRT Models\n",
+    "\n",
+    "\n",
+    "## Introduction\n",
+    "In this notebook, we download a pretrained Keras ResNet-50 model, optimize it with TF-TRT, convert it to a frozen graph, and then load it and run inference with the TensorFlow C++ API.\n",
+    "\n",
+    "First, we download the ImageNet labels.
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!mkdir data\n", + "!curl -L \"https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz\" | tar -C ./data -xz\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "8Fg4x4aomCY4", + "scrolled": true + }, + "outputs": [], + "source": [ + "!nvidia-smi" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "LG4IBNn-2PWY" + }, + "source": [ + "### Install Dependencies" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "!pip install pillow matplotlib" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 35 + }, + "colab_type": "code", + "id": "v0mfnfqg3ned", + "outputId": "11c043a0-b8e5-49e2-f907-5f1372c92a68", + "scrolled": true + }, + "outputs": [], + "source": [ + "import tensorflow as tf\n", + "print(\"Tensorflow version: \", tf.version.VERSION)\n", + "\n", + "# check TensorRT version\n", + "print(\"TensorRT version: \")\n", + "!dpkg -l | grep nvinfer" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "9U8b2394CZRu" + }, + "source": [ + "An available TensorRT installation looks like:\n", + "\n", + "```\n", + "TensorRT version: \n", + "ii libnvinfer8 8.2.2-1+cuda11.4 amd64 TensorRT runtime libraries\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "nWYufTjPCMgW" + }, + "source": [ + "### Importing required libraries" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "Yyzwxjlm37jx" + }, + "outputs": [], + "source": [ + "from __future__ import absolute_import, 
division, print_function, unicode_literals\n", + "import os\n", + "import time\n", + "\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "\n", + "import tensorflow as tf\n", + "from tensorflow import keras\n", + "from tensorflow.python.compiler.tensorrt import trt_convert as trt\n", + "from tensorflow.python.saved_model import tag_constants\n", + "from tensorflow.keras.applications.resnet50 import ResNet50\n", + "from tensorflow.keras.preprocessing import image\n", + "from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "v-R2iN4akVOi" + }, + "source": [ + "## Data\n", + "We download several random images for testing from the Internet." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "tVJ2-8rokVOl", + "scrolled": true + }, + "outputs": [], + "source": [ + "!mkdir -p ./data\n", + "!wget -O ./data/img0.JPG \"https://d17fnq9dkz9hgj.cloudfront.net/breed-uploads/2018/08/siberian-husky-detail.jpg?bust=1535566590&width=630\"\n", + "!wget -O ./data/img1.JPG \"https://www.hakaimagazine.com/wp-content/uploads/header-gulf-birds.jpg\"\n", + "!wget -O ./data/img2.JPG \"https://www.artis.nl/media/filer_public_thumbnails/filer_public/00/f1/00f1b6db-fbed-4fef-9ab0-84e944ff11f8/chimpansee_amber_r_1920x1080.jpg__1920x1080_q85_subject_location-923%2C365_subsampling-2.jpg\"\n", + "!wget -O ./data/img3.JPG \"https://www.familyhandyman.com/wp-content/uploads/2018/09/How-to-Avoid-Snakes-Slithering-Up-Your-Toilet-shutterstock_780480850.jpg\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 269 + }, + "colab_type": "code", + "id": "F_9n-AR1kVOv", + "outputId": "e0ead6dc-e761-404e-a030-f6d3057a57da" + }, + "outputs": [], + "source": [ + "from tensorflow.keras.preprocessing import 
image\n", + "\n", + "fig, axes = plt.subplots(nrows=2, ncols=2)\n", + "\n", + "for i in range(4):\n", + " img_path = './data/img%d.JPG'%i\n", + " img = image.load_img(img_path, target_size=(224, 224), interpolation='bilinear')\n", + " plt.subplot(2,2,i+1)\n", + " plt.imshow(img);\n", + " plt.axis('off');" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "xeV4r2YTkVO1" + }, + "source": [ + "## Model\n", + "\n", + "We next download and test a ResNet-50 pre-trained model from the Keras model zoo." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 73 + }, + "colab_type": "code", + "id": "WwRBOikEkVO3", + "outputId": "2d63bc46-8bac-492f-b519-9ae5f19176bc" + }, + "outputs": [], + "source": [ + "model = ResNet50(weights='imagenet')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 410 + }, + "colab_type": "code", + "id": "lFKQPoLO_ikd", + "outputId": "c0b93de8-c94b-4977-992e-c780e12a3d52" + }, + "outputs": [], + "source": [ + "for i in range(4):\n", + " img_path = './data/img%d.JPG'%i\n", + " img = image.load_img(img_path, target_size=(224, 224),interpolation='bilinear')\n", + " x = image.img_to_array(img)\n", + " x = np.expand_dims(x, axis=0)\n", + " x = preprocess_input(x)\n", + "\n", + " preds = model.predict(x)\n", + " # decode the results into a list of tuples (class, description, probability)\n", + " # (one such list for each sample in the batch)\n", + " print('{} - Predicted: {}'.format(img_path, decode_predictions(preds, top=3)[0]))\n", + "\n", + " plt.subplot(2,2,i+1)\n", + " plt.imshow(img);\n", + " plt.axis('off');\n", + " plt.title(decode_predictions(preds, top=3)[0][0][1])\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "XrL3FEcdkVPA" + }, + "source": [ + "TF-TRT takes input as a 
TensorFlow saved model; therefore, we re-export the Keras model as a TF saved model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 110 + }, + "colab_type": "code", + "id": "WxlUF3rlkVPH", + "outputId": "9f3864e7-f211-4c06-d2d2-585c1a477e34" + }, + "outputs": [], + "source": [ + "# Save the entire model as a SavedModel.\n", + "model.save('resnet50_saved_model') " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 453 + }, + "colab_type": "code", + "id": "RBu2RKs6kVPP", + "outputId": "8e063261-7efb-47fd-fa6c-1bb5076d418c" + }, + "outputs": [], + "source": [ + "!saved_model_cli show --all --dir resnet50_saved_model" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "qBQwBvlNm-J8" + }, + "source": [ + "### Inference with the native TF 2.0 saved model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "8zLN0GMCkVPe" + }, + "outputs": [], + "source": [ + "model = tf.keras.models.load_model('resnet50_saved_model')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 219 + }, + "colab_type": "code", + "id": "Fbj-UEOxkVPs", + "outputId": "3a2b34f9-8034-48cb-b3fe-477f09966025" + }, + "outputs": [], + "source": [ + "img_path = './data/img0.JPG' # Siberian_husky\n", + "img = image.load_img(img_path, target_size=(224, 224))\n", + "x = image.img_to_array(img)\n", + "x = np.expand_dims(x, axis=0)\n", + "x = preprocess_input(x)\n", + "\n", + "preds = model.predict(x)\n", + "# decode the results into a list of tuples (class, description, probability)\n", + "# (one such list for each sample in the batch)\n", + "print('{} - Predicted: {}'.format(img_path, decode_predictions(preds, 
top=3)[0]))\n", + "plt.subplot(2,2,1)\n", + "plt.imshow(img);\n", + "plt.axis('off');\n", + "plt.title(decode_predictions(preds, top=3)[0][0][1])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 35 + }, + "colab_type": "code", + "id": "CGc-dC6DvwRP", + "outputId": "e0a22e05-f4fe-47b6-93e8-2b806bf7098a" + }, + "outputs": [], + "source": [ + "batch_size = 1\n", + "batched_input = np.zeros((batch_size, 224, 224, 3), dtype=np.float32)\n", + "\n", + "for i in range(batch_size):\n", + " img_path = './data/img%d.JPG' % (i % 4)\n", + " img = image.load_img(img_path, target_size=(224, 224))\n", + " x = image.img_to_array(img)\n", + " x = np.expand_dims(x, axis=0)\n", + " x = preprocess_input(x)\n", + " batched_input[i, :] = x\n", + "batched_input = tf.constant(batched_input)\n", + "print('batched_input shape: ', batched_input.shape)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "rFBV6hQR7N3z" + }, + "outputs": [], + "source": [ + "# Benchmarking throughput\n", + "N_warmup_run = 50\n", + "N_run = 1000\n", + "elapsed_time = []\n", + "\n", + "for i in range(N_warmup_run):\n", + " preds = model.predict(batched_input)\n", + "\n", + "for i in range(N_run):\n", + " start_time = time.time()\n", + " preds = model.predict(batched_input)\n", + " end_time = time.time()\n", + " elapsed_time = np.append(elapsed_time, end_time - start_time)\n", + " if i % 50 == 0:\n", + " print('Step {}: {:4.1f}ms'.format(i, (elapsed_time[-50:].mean()) * 1000))\n", + "\n", + "print('Throughput: {:.0f} images/s'.format(N_run * batch_size / elapsed_time.sum()))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "vC_RN0BAkVPy" + }, + "source": [ + "### TF-TRT FP32 model\n", + "\n", + "We next convert the TF native FP32 model to a TF-TRT FP32 model." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 126 + }, + "colab_type": "code", + "id": "0eLImSJ-kVPz", + "outputId": "e2c353c7-8e4b-49aa-ab97-f4d82797d4d8" + }, + "outputs": [], + "source": [ + "print('Converting to TF-TRT FP32...')\n", + "conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode=trt.TrtPrecisionMode.FP32,\n", + " max_workspace_size_bytes=8000000000)\n", + "\n", + "converter = trt.TrtGraphConverterV2(input_saved_model_dir='resnet50_saved_model',\n", + " conversion_params=conversion_params)\n", + "converter.convert()\n", + "converter.save(output_saved_model_dir='resnet50_saved_model_TFTRT_FP32')\n", + "print('Done Converting to TF-TRT FP32')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 453 + }, + "colab_type": "code", + "id": "dlue_3npkVQC", + "outputId": "4dd6a366-fe9a-43c8-aad0-dd357bba41bb" + }, + "outputs": [], + "source": [ + "!saved_model_cli show --all --dir resnet50_saved_model_TFTRT_FP32" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "Vd2DoGUp8ivj" + }, + "source": [ + "Next, we load and test the TF-TRT FP32 model." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2\n", + "import tensorflow as tf\n", + "\n", + "model = tf.saved_model.load(\"resnet50_saved_model_TFTRT_FP32\", tags=[tag_constants.SERVING]).signatures['serving_default']" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "rf97K_rxvwRm" + }, + "outputs": [], + "source": [ + "def predict_tftrt(input_saved_model):\n", + " \"\"\"Runs prediction on a single image and shows the result.\n", + " input_saved_model (string): Name of the input model stored in the current dir\n", + " \"\"\"\n", + " img_path = './data/img0.JPG' # Siberian_husky\n", + " img = image.load_img(img_path, target_size=(224, 224))\n", + " x = image.img_to_array(img)\n", + " x = np.expand_dims(x, axis=0)\n", + " x = preprocess_input(x)\n", + " x = tf.constant(x)\n", + " \n", + " saved_model_loaded = tf.saved_model.load(input_saved_model, tags=[tag_constants.SERVING])\n", + " signature_keys = list(saved_model_loaded.signatures.keys())\n", + " print(signature_keys)\n", + "\n", + " infer = saved_model_loaded.signatures['serving_default']\n", + " print(infer.structured_outputs)\n", + "\n", + " labeling = infer(x)\n", + " preds = labeling['predictions'].numpy()\n", + " print('{} - Predicted: {}'.format(img_path, decode_predictions(preds, top=3)[0]))\n", + " plt.subplot(2,2,1)\n", + " plt.imshow(img);\n", + " plt.axis('off');\n", + " plt.title(decode_predictions(preds, top=3)[0][0][1])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/", + "height": 238 + }, + "colab_type": "code", + "id": "pRK0pRE-snvb", + "outputId": "1f7ab6c1-dbfa-4e3e-a21d-df9975c70455" + }, + "outputs": [], + "source": [ + "predict_tftrt('resnet50_saved_model_TFTRT_FP32')" + ] 
+ }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "z9b5j6jMvwRt" + }, + "outputs": [], + "source": [ + "def benchmark_tftrt(input_saved_model):\n", + " saved_model_loaded = tf.saved_model.load(input_saved_model, tags=[tag_constants.SERVING])\n", + " infer = saved_model_loaded.signatures['serving_default']\n", + "\n", + " N_warmup_run = 50\n", + " N_run = 1000\n", + " elapsed_time = []\n", + "\n", + " for i in range(N_warmup_run):\n", + " labeling = infer(batched_input)\n", + "\n", + " for i in range(N_run):\n", + " start_time = time.time()\n", + " labeling = infer(batched_input)\n", + " end_time = time.time()\n", + " elapsed_time = np.append(elapsed_time, end_time - start_time)\n", + " if i % 50 == 0:\n", + " print('Step {}: {:4.1f}ms'.format(i, (elapsed_time[-50:].mean()) * 1000))\n", + "\n", + " print('Throughput: {:.0f} images/s'.format(N_run * batch_size / elapsed_time.sum()))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "ai6bxNcNszHc" + }, + "outputs": [], + "source": [ + "benchmark_tftrt('resnet50_saved_model_TFTRT_FP32')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prepare model for C++ inference\n", + "\n", + "We can see that the TF-TRT FP32 model provides a significant speedup over the native Keras model. Now let's prepare this model for C++ inference: we will freeze the graph and write it to disk as a frozen saved model."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": {}, + "colab_type": "code", + "id": "2mM9D3BTEzQS" + }, + "outputs": [], + "source": [ + "from tensorflow.python.saved_model import signature_constants\n", + "from tensorflow.python.saved_model import tag_constants\n", + "from tensorflow.python.framework import convert_to_constants\n", + "\n", + "def get_func_from_saved_model(saved_model_dir):\n", + " saved_model_loaded = tf.saved_model.load(\n", + " saved_model_dir, tags=[tag_constants.SERVING])\n", + " graph_func = saved_model_loaded.signatures[\n", + " signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]\n", + " return graph_func, saved_model_loaded\n", + "\n", + "func, loaded_model = get_func_from_saved_model('resnet50_saved_model_TFTRT_FP32')\n", + "\n", + "# Create frozen func\n", + "frozen_func = convert_to_constants.convert_variables_to_constants_v2(func)\n", + "module = tf.Module()\n", + "module.myfunc = frozen_func\n", + "tf.saved_model.save(module,'resnet50_saved_model_TFTRT_FP32_frozen', signatures=frozen_func)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "colab_type": "text", + "id": "I13snJ9VkVQh" + }, + "source": [ + "### What's next\n", + "Refer back to the [Readme](README.md) to load the TF-TRT frozen saved model for inference with the TensorFlow C++ API." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "include_colab_link": true, + "machine_shape": "hm", + "name": "Colab-TF20-TF-TRT-inference-from-Keras-saved-model.ipynb", + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.10" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} From ec171c5853f76082080a16adf39f49e55209021a Mon Sep 17 00:00:00 2001 From: vinhn Date: Thu, 28 Apr 2022 21:59:16 -0700 Subject: [PATCH 3/4] fix master readme --- .../image_classification/README.md | 99 +------------------ 1 file changed, 3 insertions(+), 96 deletions(-) diff --git a/tftrt/examples-cpp/image_classification/README.md b/tftrt/examples-cpp/image_classification/README.md index 6b3e8ba4f..eec06e5d5 100755 --- a/tftrt/examples-cpp/image_classification/README.md +++ b/tftrt/examples-cpp/image_classification/README.md @@ -3,104 +3,11 @@ # TF-TRT C++ Image Recognition Demo -This example shows how you can load a native TF Keras ResNet-50 model, convert it to a TF-TRT optimized model (via the TF-TRT Python API), save the model as a frozen graph, and then finally load and serve the model with the TF C++ API. The process can be demonstrated with the below workflow diagram: +This example shows how you can load a native TF Keras ResNet-50 model, convert it to a TF-TRT optimized model (via the TF-TRT Python API), save the model as either a frozen graph or a saved model, and then finally load and serve the model with the TF C++ API. 
The process can be demonstrated with the below workflow diagram: -![TF-TRT C++ Inference workflow](TF-TRT_CPP_inference.png "TF-TRT C++ Inference") +![TF-TRT C++ Inference workflow](TF-TRT_CPP_inference_overview.png "TF-TRT C++ Inference") This example is built based upon the original Google's TensorFlow C++ image classification [example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/label_image), on top of which we added the TF-TRT conversion part and adapted the C++ code for loading and inferencing with the TF-TRT model. -## Docker environment -Docker images provide a convinient and repeatable environment for experimentation. This workflow was tested in the NVIDIA NGC TensorFlow 22.01 docker container that comes with a TensorFlow 2.x build. Tools required for building this example, such as Bazel, NVIDIA CUDA, CUDNN, NCCL libraries are all readily setup. - -To replecate the below steps, start by pulling the NGC TF container: - -``` -docker pull nvcr.io/nvidia/tensorflow:22.01-tf2-py3 -``` -Then start the container with nvidia-docker: - -``` -nvidia-docker run --rm -it -p 8888:8888 --name TFTRT_CPP nvcr.io/nvidia/tensorflow:22.01-tf2-py3 -``` - -You will land at `/workspace` within the docker container. Clone the TF-TRT example repository with: - -``` -git clone https://github.com/tensorflow/tensorrt -cd tensorrt -``` - -Then copy the content of this C++ example directory to the TensorFlow example source directory: - -``` -cp -r ./tftrt/examples-cpp/image_classification/ /opt/tensorflow/tensorflow-source/tensorflow/examples/ -cd /opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification -``` - - -## Convert to TF-TRT Model - -Start Jupyter lab with: - -``` -jupyter lab -ip 0.0.0.0 -``` - -A Jupyter notebook for downloading the Keras ResNet-50 model and TF-TRT conversion is provided in `tf-trt-conversion.ipynb` for your experimentation. 
By default, this notebook will produce a TF-TRT FP32 model at `/opt/tensorflow/tensorflow-source/tensorflow/examples/image-classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb`. - -As part of the conversion, the notebook will also carry out benchmarking and print out the throughput statistics. - - - - -## Build the C++ example -The NVIDIA NGC container should have everything you need to run this example installed already. - -To build it, first, you need to copy the build scripts `tftrt_build.sh` to `/opt/tensorflow`: - -``` -cp tftrt-build.sh /opt/tensorflow -``` - -Then from `/opt/tensorflow`, run the build command: - -```bash -cd /opt/tensorflow -bash ./tftrt-build.sh -``` - -That should build a binary executable `tftrt_label_image` that you can then run like this: - -```bash -tensorflow-source/bazel-bin/tensorflow/examples/image_classification/tftrt_label_image \ ---graph=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/frozen_models_trt_fp32/frozen_models_trt_fp32.pb \ ---image=/opt/tensorflow/tensorflow-source/tensorflow/examples/image_classification/data/img0.JPG -``` - -This uses the default image `img0.JPG` which was download as part of the conversion notebook, and should -output something similar to this: - -``` -2022-02-23 13:53:56.076348: I tensorflow/examples/image-classification/main.cc:276] malamute (250): 0.575496 -2022-02-23 13:53:56.076384: I tensorflow/examples/image-classification/main.cc:276] Saint Bernard (248): 0.399285 -2022-02-23 13:53:56.076412: I tensorflow/examples/image-classification/main.cc:276] Eskimo dog (249): 0.0228338 -2022-02-23 13:53:56.076423: I tensorflow/examples/image-classification/main.cc:276] Ibizan hound (174): 0.00127912 -2022-02-23 13:53:56.076449: I tensorflow/examples/image-classification/main.cc:276] Mexican hairless (269): 0.000520922 -``` - -The program will also benchmark and output the throughput. Observe the improved throughput offered by moving from Python to C++ serving. 
- -Next, try it out on your own images by supplying the --image= argument, e.g. - -```bash -tensorflow-source/bazel-bin/tensorflow/examples/label_image/tftrt_label_image --image=my_image.png -``` - -## What's next - -Try to build TF-TRT FP16 and INT8 models and test on your own data, and serve them with C++. - -```bash - -``` +See the respective sub-folder for details on either approach. \ No newline at end of file From 0ae220d4b0432842d5ac7387084eb1abaf0aa94c Mon Sep 17 00:00:00 2001 From: vinhn Date: Thu, 26 May 2022 19:33:40 -0700 Subject: [PATCH 4/4] adding notebook tracker --- tftrt/examples-cpp/image_classification/frozen-graph/README.md | 2 +- tftrt/examples-cpp/image_classification/saved-model/README.md | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/tftrt/examples-cpp/image_classification/frozen-graph/README.md b/tftrt/examples-cpp/image_classification/frozen-graph/README.md index a6b25bc39..9fd1ca305 100755 --- a/tftrt/examples-cpp/image_classification/frozen-graph/README.md +++ b/tftrt/examples-cpp/image_classification/frozen-graph/README.md @@ -1,5 +1,5 @@ - + # TF-TRT C++ Image Recognition Demo diff --git a/tftrt/examples-cpp/image_classification/saved-model/README.md b/tftrt/examples-cpp/image_classification/saved-model/README.md index 716050b50..6c2322331 100755 --- a/tftrt/examples-cpp/image_classification/saved-model/README.md +++ b/tftrt/examples-cpp/image_classification/saved-model/README.md @@ -1,4 +1,5 @@ + # TF-TRT C++ Image Recognition Demo
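The two benchmark cells in the notebook above share the same timing pattern: a warm-up phase that is excluded from the measurement, followed by timed runs whose total is converted into throughput as `N_run * batch_size / total_elapsed_seconds`. As a framework-agnostic sketch of that pattern (the `benchmark` helper and its parameter names are illustrative, not part of this repository; `infer_fn` stands in for any model callable such as `model.predict` or the TF-TRT `serving_default` signature):

```python
import time


def benchmark(infer_fn, batch, batch_size, n_warmup=50, n_run=1000):
    """Time repeated calls to infer_fn(batch) and report latency/throughput.

    Mirrors the notebook's benchmark cells: warm-up iterations are excluded,
    and throughput = n_run * batch_size / total elapsed seconds.
    """
    # Warm-up: lets TF-TRT build engines and caches fill; not timed.
    for _ in range(n_warmup):
        infer_fn(batch)

    total = 0.0
    for _ in range(n_run):
        start = time.perf_counter()
        infer_fn(batch)
        total += time.perf_counter() - start

    return {
        "mean_ms": 1000.0 * total / n_run,
        "throughput_ips": n_run * batch_size / total,
    }
```

Under these assumptions, `benchmark(infer, batched_input, batch_size)` would reproduce the notebook's throughput numbers (modulo the use of `time.perf_counter` here instead of `time.time`, which is less susceptible to clock adjustments).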