via Using an iPhone as a 3D Mouse with Multipeer Connectivity in Swift.
My recent experiment with CoreMotion, CoreMotion Controlled 3D Sketching on an iPhone with Swift, got me wondering whether it would be possible to use an iPhone as a 3D mouse to control an application running on a separate device. It turns out that with Apple’s Multipeer Connectivity framework, it’s not only possible, it’s pretty awesome too!
The Multipeer Connectivity framework provides peer-to-peer communication between iOS devices over Wi-Fi and Bluetooth. As well as letting devices send discrete bundles of information, it also supports streaming, which is exactly what I need: it allows my iPhone to transmit a continuous stream of data describing its attitude (roll, pitch, and yaw) in 3D space.
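Here’s a minimal sketch of what that streaming could look like, assuming an already-connected MCSession (the class name, stream name, and sample rate are my own illustrative choices, not the article’s actual code):

```swift
import CoreMotion
import MultipeerConnectivity

/// A rough sketch, not the article's implementation: streams the device's
/// attitude (roll, pitch, yaw) to a connected peer as three doubles per update.
class AttitudeStreamer {
    private let motionManager = CMMotionManager()

    func startStreaming(to peer: MCPeerID, over session: MCSession) throws {
        // Open an output stream to the peer over the existing session.
        let stream = try session.startStream(withName: "attitude", toPeer: peer)
        stream.schedule(in: .main, forMode: .default)
        stream.open()

        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0  // 60 samples per second
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude, stream.hasSpaceAvailable else { return }

            // Pack roll, pitch and yaw into a fixed-size binary payload.
            let values = [attitude.roll, attitude.pitch, attitude.yaw]
            values.withUnsafeBytes { raw in
                guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return }
                _ = stream.write(base, maxLength: raw.count)
            }
        }
    }
}
```

On the receiving device, the MCSessionDelegate’s session(_:didReceive:withName:fromPeer:) callback hands over the matching InputStream, from which the three doubles can be read back on each update.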
via How to Integrate OculusRift SDK with Unity3D
The Rift is a virtual reality head-mounted display developed by Oculus that brings an amazing virtual reality experience to its users. For more information, see http://en.wikipedia.org/wiki/Oculus_Rift.
Oculus offers SDKs to integrate the Rift with different software such as Unity and Unreal Engine. In this blog I will be concentrating on integrating Oculus Rift SDK 0.4.4 with Unity.
We are going to discuss Oculus Rift SDK 0.4.4 using the DK2 device, which offers:
– 960 x 1080 HD resolution per eye
– 75 Hz, 72 Hz, 60 Hz refresh rate
– Persistence: 2 ms (75 Hz), 3 ms (72 Hz), full (60 Hz)
– Gyroscope, Accelerometer, Magnetometer sensors
– 1000 Hz update rate
Another cool piece of Intel® technology is the Intel® Edison platform. With it, an IoT developer can build prototypes that gather sensor information or send control signals to other devices, and turn those prototypes into innovative products. The Intel® Edison board has integrated Wi-Fi and Bluetooth* Low Energy (LE), so we can connect it to the internet and create real Internet of Things solutions.
A great article on using Intel® RealSense™ technology in combination with the Intel® Edison development platform was written by Peter Ma. It gives two examples of such applications. The first uses the Intel® RealSense™ 3D Camera as input and the Intel® Edison board as output: the SDK triggers an LED light on the board. The second example, which uses the Intel® Edison board as input and the Intel® RealSense™ 3D Camera as output, uses voice synthesis to speak the sensor data from the board. To follow along, you need:
- Intel® Edison board with the Arduino* breakout board
- Seeed Grove* – Starter Kit Plus – Intel® IoT Edition
- 4th generation (or later) Intel® Core™ processor
- 8GB free hard disk space
- USB 3.0
- An Intel® RealSense™ 3D Camera F200 (system-integrated or peripheral version)
- A server equipped with Node.js
via Build a High-Performance Mobile App With Famo.us and Manifold.js – Tuts+ Code Tutorial.
Jenn Simmons of the Web Platform Podcast recently had Famo.us CEO Steve Newcomb on the show to discuss mobile performance and Famo.us’s upcoming mixed mode. The timing was perfect, as Microsoft had just released ManifoldJS, a tool which allows you to package your web experience as native apps across Android, iOS, and Windows. I wanted to put these two technologies to the test.
In short, I wanted to determine whether Famo.us really does deliver great mobile performance, and to see how straightforward the process of packaging my web application as a mobile app would be.
via CoreMotion Controlled 3D Sketching on an iPhone with Swift.
I was really impressed by a demo of InkScape that I read about on Creative Applications recently. InkScape is an Android app which allows users to sketch in a 3D space that’s controlled by the device’s accelerometer. It’s inspired by Rhonda, which pre-dates accelerometers and uses a trackball instead.
Of course, my first thought was, “How can I do this in Swift?”. I’ve never done any work with CoreMotion before, so this was a good opportunity to learn some new stuff. My first port of call was this excellent article on iOS motion at NSHipster.
My plan for the application was to have a SceneKit scene with a motion-controlled camera rotating around an offset pivot point at the centre of the SceneKit world. On each touchesBegan(), I’d create a new flat box in the centre of the screen, aligned with the camera; on touchesMoved(), I’d use the touch location to append to a path drawn onto a CAShapeLayer, which I’d use as the diffuse material for the newly created geometry.
Easy! Let’s break it down:
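As a rough sketch of the camera half of that plan (the class and property names here are my own, hypothetical ones, not the article’s), the motion-controlled orbit might look like this:

```swift
import UIKit
import SceneKit
import CoreMotion

// A minimal sketch, assuming a bare SCNScene: a camera hangs off an offset
// pivot node whose orientation follows CMDeviceMotion's attitude.
class SketchViewController: UIViewController {
    private let sceneView = SCNView()
    private let cameraPivot = SCNNode()   // pivot at the centre of the SceneKit world
    private let motionManager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)

        // Offset the camera from the pivot so rotating the pivot orbits
        // the camera around the centre of the scene.
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 10)
        cameraPivot.addChildNode(cameraNode)
        sceneView.scene?.rootNode.addChildNode(cameraPivot)

        // Drive the pivot's orientation from the device's attitude.
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let attitude = motion?.attitude else { return }
            self?.cameraPivot.eulerAngles = SCNVector3(
                Float(attitude.pitch), Float(attitude.yaw), Float(attitude.roll))
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Per the plan above: create a new flat box aligned with the camera,
        // whose diffuse material is a CAShapeLayer the touch path is drawn onto.
    }
}
```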
via HexaFlip: A Flexible 3D Cube Plugin.