
Error when trying to use Hair Segmentation

See original GitHub issue

Plugin Version or Commit ID

v0.10.0

Unity Version

2021.3.3f1

Your Host OS

Windows 10 Home

Target Platform

UnityEditor

Description

I’m currently trying to extend the solution from the tutorial found on this project’s wiki.

My goal is to run face mesh and hair segmentation simultaneously: the face with the mesh should render to one canvas (screen1) and the hair segmentation to another (screen2).

I modified the code from the tutorial like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using Mediapipe.Unity.CoordinateSystem;

using Stopwatch = System.Diagnostics.Stopwatch;

namespace Mediapipe.Unity.Tutorial
{
  public class FaceMesh : MonoBehaviour
  {
    [SerializeField] private TextAsset _configAsset;
    [SerializeField] private TextAsset _configAssetHair;
    [SerializeField] private RawImage _screen1;
    [SerializeField] private RawImage _screen2;
    [SerializeField] private int _width;
    [SerializeField] private int _height;
    [SerializeField] private int _fps;

    private CalculatorGraph _graph1;
    private CalculatorGraph _graph2;
    private ResourceManager _resourceManager;

    private WebCamTexture _webCamTexture;
    private Texture2D _inputTexture;
    private Color32[] _inputPixelData;
    private Texture2D _outputTexture1;
    private Color32[] _outputPixelData1;
    private Texture2D _outputTexture2;
    private Color32[] _outputPixelData2;

    private IEnumerator Start()
    {
      if (WebCamTexture.devices.Length == 0)
      {
        throw new System.Exception("Web Camera devices are not found");
      }
      var webCamDevice = WebCamTexture.devices[0];
      _webCamTexture = new WebCamTexture(webCamDevice.name, _width, _height, _fps);
      _webCamTexture.Play();

      yield return new WaitUntil(() => _webCamTexture.width > 16);

      _inputTexture = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
      _inputPixelData = new Color32[_width * _height];

      _screen1.rectTransform.sizeDelta = new Vector2(_width, _height);
      _outputTexture1 = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
      _outputPixelData1 = new Color32[_width * _height];
      _screen1.texture = _outputTexture1;

      _screen2.rectTransform.sizeDelta = new Vector2(_width, _height);
      _outputTexture2 = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
      _outputPixelData2 = new Color32[_width * _height];
      _screen2.texture = _outputTexture2;

      _resourceManager = new LocalResourceManager();
      yield return _resourceManager.PrepareAssetAsync("face_detection_short_range.bytes");
      yield return _resourceManager.PrepareAssetAsync("face_landmark_with_attention.bytes");
      yield return _resourceManager.PrepareAssetAsync("hair_segmentation.bytes");

      var stopwatch = new Stopwatch();

      // first graph
      _graph1 = new CalculatorGraph(_configAsset.text);
      var outputVideoStream1 = new OutputStream<ImageFramePacket, ImageFrame>(_graph1, "output_video");
      var multiFaceLandmarksStream1 = new OutputStream<NormalizedLandmarkListVectorPacket, List<NormalizedLandmarkList>>(_graph1, "multi_face_landmarks");
      outputVideoStream1.StartPolling().AssertOk();
      multiFaceLandmarksStream1.StartPolling().AssertOk();
      _graph1.StartRun().AssertOk();

      // second graph
      _graph2 = new CalculatorGraph(_configAssetHair.text);
      var outputVideoStream2 = new OutputStream<ImageFramePacket, ImageFrame>(_graph2, "hair_mask");
      outputVideoStream2.StartPolling().AssertOk();
      _graph2.StartRun().AssertOk();

      stopwatch.Start();

      var screenRect1 = _screen1.GetComponent<RectTransform>().rect;
      var screenRect2 = _screen2.GetComponent<RectTransform>().rect;

      while (true)
      {
        _inputTexture.SetPixels32(_webCamTexture.GetPixels32(_inputPixelData));
        var imageFrame1 = new ImageFrame(ImageFormat.Types.Format.Srgba, _width, _height, _width * 4, _inputTexture.GetRawTextureData<byte>());
        var imageFrame2 = new ImageFrame(ImageFormat.Types.Format.Srgba, _width, _height, _width * 4, _inputTexture.GetRawTextureData<byte>());
        var currentTimestamp = stopwatch.ElapsedTicks / (System.TimeSpan.TicksPerMillisecond / 1000);

        _graph1.AddPacketToInputStream("input_video", new ImageFramePacket(imageFrame1, new Timestamp(currentTimestamp))).AssertOk();
        _graph2.AddPacketToInputStream("input_video", new ImageFramePacket(imageFrame2, new Timestamp(currentTimestamp))).AssertOk();

        yield return new WaitForEndOfFrame();

        if (outputVideoStream1.TryGetNext(out var outputVideo1))
        {
          if (outputVideo1.TryReadPixelData(_outputPixelData1))
          {
            _outputTexture1.SetPixels32(_outputPixelData1);
            _outputTexture1.Apply();
          }
        }

        if (multiFaceLandmarksStream1.TryGetNext(out var multiFaceLandmarks1))
        {
          if (multiFaceLandmarks1 != null && multiFaceLandmarks1.Count > 0)
          {
            foreach (var landmarks in multiFaceLandmarks1)
            {
              // top of the head
              var topOfHead = landmarks.Landmark[10];
              Debug.Log($"Unity Local Coordinates: {screenRect1.GetPoint(topOfHead)}, Image Coordinates: {topOfHead}");
            }
          }
        }

        // Hair segmentation
        if (outputVideoStream2.TryGetNext(out var outputVideo2))
        {
          if (outputVideo2.TryReadPixelData(_outputPixelData2))
          {
            _outputTexture2.SetPixels32(_outputPixelData2);
            _outputTexture2.Apply();
          }
        }
      }
    }

    private void OnDestroy()
    {
      if (_webCamTexture != null)
      {
        _webCamTexture.Stop();
      }

      if (_graph1 != null)
      {
        try
        {
          _graph1.CloseInputStream("input_video").AssertOk();
          _graph1.WaitUntilDone().AssertOk();
        }
        finally
        {
          _graph1.Dispose();
        }
      }

      if (_graph2 != null)
      {
        try
        {
          _graph2.CloseInputStream("input_video").AssertOk();
          _graph2.WaitUntilDone().AssertOk();
        }
        finally
        {
          _graph2.Dispose();
        }
      }
    }
  }
}

For the hair segmentation graph, I’m trying to use the hair_segmentation_cpu.txt config that can be found in the sample scene for Hair Segmentation.

But I’m getting the following error:

MediaPipeException: INVALID_ARGUMENT: ValidateRequiredSidePackets failed to validate: 
; Side packet "input_horizontally_flipped" is required but was not provided.
; Side packet "input_rotation" is required but was not provided.
; Side packet "input_vertically_flipped" is required but was not provided.
; Side packet "output_horizontally_flipped" is required but was not provided.
; Side packet "output_rotation" is required but was not provided.
; Side packet "output_vertically_flipped" is required but was not provided.
Mediapipe.Status.AssertOk () (at Packages/com.github.homuler.mediapipe/Runtime/Scripts/Framework/Port/Status.cs:147)
Mediapipe.Unity.Tutorial.FaceMesh+<Start>d__17.MoveNext () (at Assets/MediaPipeUnity/Tutorial/Official Solution/FaceMesh.cs:78)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <8b27195d2ee14da7b6fd1e5435850f80>:0)

Line 78 of Assets/MediaPipeUnity/Tutorial/Official Solution/FaceMesh.cs is:

_graph2.StartRun().AssertOk();
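The error says the hair segmentation graph declares required input side packets (rotation and flip flags) that are never provided. As a sketch, based on the pattern used in the plugin’s own tutorial for v0.10.0 (packet names taken directly from the error message), they can be supplied when starting the second graph, assuming the webcam image is already upright and unflipped:

```csharp
// Sketch: provide the side packets that hair_segmentation_cpu.txt
// declares as required. The values below assume the input image
// needs no rotation or flipping; adjust them for your camera setup.
var sidePacket = new SidePacket();
sidePacket.Emplace("input_rotation", new IntPacket(0));
sidePacket.Emplace("input_horizontally_flipped", new BoolPacket(false));
sidePacket.Emplace("input_vertically_flipped", new BoolPacket(false));
sidePacket.Emplace("output_rotation", new IntPacket(0));
sidePacket.Emplace("output_horizontally_flipped", new BoolPacket(false));
sidePacket.Emplace("output_vertically_flipped", new BoolPacket(false));

_graph2.StartRun(sidePacket).AssertOk();
```

With the side packets supplied, `StartRun` passes validation and the remaining issue (as the comments below show) is the pixel format of the input `ImageFrame`.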

Code to Reproduce the issue

No response

Additional Context

No response

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

1 reaction
jcelsi commented, Aug 3, 2022

That was the problem. I changed the input format to RGB24 and _width * 4 to _width * 3.

Thank you very much.

0 reactions
homuler commented, Aug 3, 2022

The format of imageFrame2 is declared as RGB24 (or Srgb), but the given data is actually RGBA32.

_inputTexture = new Texture2D(_width, _height, TextureFormat.RGBA32, false);
_outputTexture2 = new Texture2D(_width, _height, TextureFormat.RGB24, false);
// ...
_inputTexture.SetPixels32(_webCamTexture.GetPixels32(_inputPixelData));
var imageFrame1 = new ImageFrame(ImageFormat.Types.Format.Srgba, _width, _height, _width * 4, _inputTexture.GetRawTextureData<byte>());
var imageFrame2 = new ImageFrame(ImageFormat.Types.Format.Srgb, _width, _height, _width * 3, _inputTexture.GetRawTextureData<byte>());
