
Help with Pose Detection tutorial setup


Plugin Version or Commit ID

latest

Unity Version

2021.2.7f1

Your Host OS

macOS Monterey

Target Platform

UnityEditor

Description

Hello, I’m trying to create a separate model for pose detection by tweaking the steps from the face mesh tutorial. I used this config: https://github.com/google/mediapipe/blob/c6c80c37452d0938b1577bd1ad44ad096ca918e0/mediapipe/graphs/pose_tracking/pose_tracking_cpu.pbtxt

I copy-pasted the code from FaceMesh.cs and changed the resource manager call to load pose_detection.bytes: yield return _resourceManager.PrepareAssetAsync("pose_detection.bytes"); The code is attached as PoseEstimation.cs.txt.

However, the following error comes up:

MediaPipeException: INTERNAL: Graph has errors: Calculator::Process() for node "poserenderercpu__RecolorCalculator" failed: ; RET_CHECK failure (external/com_google_mediapipe/mediapipe/calculators/image/recolor_calculator.cc:241) input_mat.channels() == 3

Files are also attached. I guess it has something to do with the image texture channels; any ideas how to solve this?

Code to Reproduce the issue

pose_tracking_cpu.txt PoseEstimation.cs.txt

Additional Context

No response

Issue Analytics

  • State: closed
  • Created a year ago
  • Comments:5 (2 by maintainers)

Top GitHub Comments

1 reaction
homuler commented, Jul 20, 2022

I’ve use this code however it throws some exception. at /NormalizedLandmarkListVectorPacket.cs:29

Which kind of exception occured? Please share the complete error message.

var poseLandmarksStream = new OutputStream<NormalizedLandmarkListVectorPacket, List<NormalizedLandmarkList>>(_graph, "pose_landmarks");

I think the stream type is wrong.

var poseLandmarksStream = new OutputStream<NormalizedLandmarkListPacket, NormalizedLandmarkList>(_graph, "pose_landmarks");
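For context, the pose_tracking_cpu graph emits a single NormalizedLandmarkList on "pose_landmarks" (one tracked person), which is why the vector packet type fails. A minimal sketch of how the corrected stream might be consumed, assuming the polling pattern from the plugin's face mesh tutorial (StartPolling / TryGetNext are taken from that tutorial and should be checked against your plugin version):

```csharp
// Sketch only: follows the face mesh tutorial's polling pattern.
var poseLandmarksStream = new OutputStream<NormalizedLandmarkListPacket, NormalizedLandmarkList>(_graph, "pose_landmarks");
poseLandmarksStream.StartPolling().AssertOk();
_graph.StartRun().AssertOk();

// Inside the frame loop, after feeding a packet to "input_video":
if (poseLandmarksStream.TryGetNext(out var poseLandmarks))
{
  // poseLandmarks is one NormalizedLandmarkList (the pose landmarks for the
  // single tracked person), not a List<NormalizedLandmarkList>.
  Debug.Log(poseLandmarks);
}
```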

Also will appreciate if you can share documentation links maybe.

What document are you talking about?

1 reaction
homuler commented, Jul 17, 2022

The most straightforward way is to change the input format from RGBA to RGB.

@@ -43,9 +43,9 @@ namespace Mediapipe.Unity.Tutorial
 
       _screen.rectTransform.sizeDelta = new Vector2(_width, _height);
 
-      _inputTexture = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
+      _inputTexture = new Texture2D(_width, _height, TextureFormat.RGB24, false);
       _inputPixelData = new Color32[_width * _height];
-      _outputTexture = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
+      _outputTexture = new Texture2D(_width, _height, TextureFormat.RGB24, false);
       _outputPixelData = new Color32[_width * _height];
 
       _screen.texture = _outputTexture;
@@ -67,7 +67,7 @@ namespace Mediapipe.Unity.Tutorial
       while (true)
       {
         _inputTexture.SetPixels32(_webCamTexture.GetPixels32(_inputPixelData));
-        var imageFrame = new ImageFrame(ImageFormat.Types.Format.Srgba, _width, _height, _width * 4, _inputTexture.GetRawTextureData<byte>());
+        var imageFrame = new ImageFrame(ImageFormat.Types.Format.Srgb, _width, _height, _width * 3, _inputTexture.GetRawTextureData<byte>());
         var currentTimestamp = stopwatch.ElapsedTicks / (System.TimeSpan.TicksPerMillisecond / 1000);
         _graph.AddPacketToInputStream("input_video", new ImageFramePacket(imageFrame, new Timestamp(currentTimestamp))).AssertOk();
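Why the stride argument changes along with the format: ImageFrame's stride is bytes per row, so it has to match the bytes per pixel of the packed format. The RecolorCalculator RET_CHECKs input_mat.channels() == 3, hence 3-byte RGB rather than 4-byte RGBA. A quick self-contained sketch of the byte math (the resolution value is illustrative, not from the tutorial):

```csharp
using System;

class StrideCheck
{
    static void Main()
    {
        // Example width only; use your webcam texture's actual width.
        int width = 1280;

        // 4 bytes per pixel: TextureFormat.RGBA32 / ImageFormat Srgba
        int rgbaStride = width * 4;
        // 3 bytes per pixel: TextureFormat.RGB24 / ImageFormat Srgb
        int rgbStride = width * 3;

        Console.WriteLine($"RGBA32 row stride: {rgbaStride} bytes");
        Console.WriteLine($"RGB24  row stride: {rgbStride} bytes");
    }
}
```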

Another option is to stop rendering annotations using MediaPipe (see the Pose Tracking sample).

Also will appreciate if you can share of clarify whenever code for getting landmark coordinates is the same as in tutorial.

Sorry, I don’t get what you mean by “whenever code”.
