Microsoft Open-Sources New Version of Phi-4: Inference Efficiency Up 10x, Runs on Laptops
Jin10 reported on July 10 that Microsoft open-sourced the latest member of the Phi-4 family, Phi-4-mini-flash-reasoning, on its official website this morning. The mini-flash version continues the Phi-4 family's hallmark of small parameter counts with strong performance. It is designed for scenarios constrained by compute, memory, and latency: it can run on a single GPU and is suited to edge devices such as laptops and tablets. Compared with the previous version, mini-flash adopts SambaY, Microsoft's self-developed architecture, raising inference efficiency by 10 times and cutting average latency by 2-3 times, a significant improvement in overall inference performance.
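For readers who want to try a model like this locally, the sketch below shows one plausible way to load and query it with the Hugging Face transformers library on a single GPU. Note the assumptions: the model identifier microsoft/Phi-4-mini-flash-reasoning follows Microsoft's naming convention for the Phi-4 family but is not confirmed by this article, and the exact prompt format and hardware requirements should be verified on the Hugging Face Hub before use.

```python
# Minimal sketch: running a small reasoning model locally with transformers.
# The model ID below is an ASSUMPTION based on Phi-4 family naming; verify
# the published ID on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single consumer GPU
    device_map="auto",          # place weights on the available GPU (or CPU)
    trust_remote_code=True,     # some Phi releases ship custom model code
)

# Build a chat-formatted prompt and generate a short answer.
messages = [{"role": "user", "content": "Solve step by step: 17 * 24 = ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With float16 weights, a model of this size should fit comfortably in the memory of a single laptop-class GPU, which is consistent with the edge-device positioning described above; on machines without a GPU, device_map="auto" falls back to CPU at the cost of slower generation.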