Dataset Viewer
aid (string) | mid (string) | abstract (string) | related_work (string) | ref_abstract (dict) | title (string) | text_except_rw (string) | total_words (int64)
---|---|---|---|---|---|---|---
cs9809108 | 2949225035 | We present our approach to the problem of how an agent, within an economic Multi-Agent System, can determine when it should behave strategically (i.e. learn and use models of other agents), and when it should act as a simple price-taker. We provide a framework for the incremental implementation of modeling capabilities in agents, and a description of the forms of knowledge required. The agents were implemented and different populations simulated in order to learn more about their behavior and the merits of using and learning agent models. Our results show, among other lessons, how savvy buyers can avoid being "cheated" by sellers, how price volatility can be used to quantitatively predict the benefits of deeper models, and how specific types of agent populations influence system behavior. | Within the MAS community, some work @cite_15 has focused on how artificial AI-based learning agents would fare in communities of similar agents. For example, @cite_6 and @cite_8 show how agents can learn the capabilities of others via repeated interactions, but these agents do not learn to predict what actions others might take. Most of the work in MAS also fails to recognize the possible gains from using explicit agent models to predict agent actions. @cite_9 is an exception and gives another approach for using nested agent models. However, they do not go so far as to try to quantify the advantages of their nested models or show how these could be learned via observations. We believe that our research will bring to the foreground some of the common observations seen in these research areas and help to clarify the implications and utility of learning and using nested agent models. | {
"abstract": [
"In multi-agent environments, an intelligent agent often needs to interact with other individuals or groups of agents to achieve its goals. Agent tracking is one key capability required for intelligent interaction. It involves monitoring the observable actions of other agents and inferring their unobserved actions, plans, goals and behaviors. This article examines the implications of such an agent tracking capability for agent architectures. It specifically focuses on real-time and dynamic environments, where an intelligent agent is faced with the challenge of tracking the highly flexible mix of goal-driven and reactive behaviors of other agents, in real-time. The key implication is that an agent architecture needs to provide direct support for flexible and efficient reasoning about other agents' models. In this article, such support takes the form of an architectural capability to execute the other agent's models, enabling mental simulation of their behaviors. Other architectural requirements that follow include the capabilities for (pseudo-) simultaneous execution of multiple agent models, dynamic sharing and unsharing of multiple agent models and high bandwidth inter-model communication. We have implemented an agent architecture, an experimental variant of the Soar integrated architecture, that conforms to all of these requirements. Agents based on this architecture have been implemented to execute two different tasks in a real-time, dynamic, multi-agent domain. The article presents experimental results illustrating the agents' dynamic behavior.",
"I. Introduction, 488. β II. The model with automobiles as an example, 489. β III. Examples and applications, 492. β IV. Counteracting institutions, 499. β V. Conclusion, 500.",
"The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. This paper outlines a gradual evolution in our formal conception of intelligence that brings it closer to our informal conception and simultaneously reduces the gap between theory and practice.",
""
],
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_6",
"@cite_8"
],
"mid": [
"1528079221",
"2156109180",
"1591263692",
""
]
} | Learning Nested Agent Models in an Information Economy | In open, multi-agent systems, agents can come and go without any central control or guidance, and thus how and which agents interact with each other will change dynamically. Agents might try to manipulate the interactions to their individual benefits, at the cost of the global efficiency. To avoid this, the protocols and mechanisms that the agents engage in might be constructed to make manipulation irrational (Rosenschein and Zlotkin, 1994), but unfortunately this strategy is only applicable in restricted domains. By situating agents in an economic society, as we do in the University of Michigan Digital Library (UMDL), we can make each agent responsible for making its own decisions about when to buy/sell and who to do business with (Atkins et al., 1996). A market-based infrastructure, built around computational auction agents, serves to discourage agents from engaging in strategic reasoning to manipulate the system by keeping the competitive pressures high. However, since many instances can arise where imperfections in competition could be exploited, agents might benefit from strategic reasoning, either by manipulating the system or not allowing others to manipulate them. But strategic reasoning requires effort. An agent in an information economy like the UMDL must therefore be capable of strategic reasoning and of determining when it is worthwhile to invest in strategic reasoning rather than letting its welfare rest in the hands of the market mechanism.
In this paper, we present our approach to the problem of how an agent, within an economic MAS, can determine when it should behave strategically, and when it should act as a simple price-taker. More specifically, we let the agent's strategy consist of learning nested models of the other agents, so the decision it must make refers to which of the models will give it greater gains. We show how, in some circumstances, agents benefit by learning and using models of others, while at other times the extra effort is wasted. Our results point to metrics that can be used to make quantitative predictions as to the benefits obtained from learning and using deeper models.
Description of the UMDL
The UMDL project is a large-scale, multidisciplinary effort to design and build a flexible, scalable infrastructure for rendering library services in a digital networked environment. In order to meet these goals, we chose to implement the library as a collection of interacting agents, each specialized to perform a particular task. These agents buy and sell goods/services from each other, within an artificial economy, in an effort to make a profit. Since the UMDL is an open system, which will allow third parties to build and integrate their own agents into the architecture, we treat all agents as purely selfish.
Implications of the information economy.
Information goods/services, like those provided in the UMDL, are very hard to compartmentalize into equivalence classes that all agents can agree on. For example, if a web search engine service is defined as a good, then all agents providing web search services can be considered as selling the same good. It is likely, however, that a buyer of this good might decide that seller s1 provides better answers than seller s2. We cannot possibly hope to enumerate the set of reasons an agent might have for preferring one set of answers (and thus one search agent) over another, and we should not try to do so. It should be up to the individual buyers to decide what items belong to the same good category, each buyer clustering items in possibly different ways.
This situation is even more evident when we consider an information economy rooted in some information delivery infrastructure (e.g. the Internet). There are two main characteristics that set this economy apart from a traditional economy.
• There is virtually no cost of reproduction. Once the information is created it can be duplicated virtually for free.
• All agents have virtually direct and free access to all other agents.
If these two characteristics are present in an economy, it is useless to talk about supply and demand, since supply is practically infinite for any particular good and available everywhere. The only way agents can survive in such an economy is by providing value-added services that are tailored to meet their customers' needs. Each provider will try to differentiate his goods from everyone else's while each buyer will try to find those suppliers that best meet her value function. We propose to build agents that can achieve these goals by learning models of other agents and making strategic decisions based on these models. These techniques can also be applied, with variable levels of efficacy, to traditional economies.
A Simplified Model of the UMDL
In order to capture the main characteristics of the UMDL, and to facilitate the development and testing of agents, we have defined an "abstract" economic model. We define an economic society of agents as one where each agent is either a buyer or a seller of some particular good. The set of buyers is B and the set of sellers is S. These agents exchange goods by paying some price p ∈ P, where P is a finite set. The buyers are capable of assessing the quality of a good received and giving it some value q ∈ Q, where Q is also a finite set.
The exchange protocol, seen in Figure 1, works as follows: When a buyer b ∈ B wants to buy a good g, she will advertise this fact. Each seller s ∈ S that sells that good will give his bid in the form of a price p^g_s. The buyer will pick one of these and will pay the seller. All agents will be made aware of this choice along with the prices offered by all the sellers. The winning seller will then return the specified good. Note that there is no law that forces the seller to return a good of any particular quality. For example, an agent that sells web search services returns a set of hits as its good. Each buyer of this good might determine its quality based on the time it took for the response to arrive, the number of hits, the relevance of the hits, or any combination of these and/or other features. Therefore, it would usually be impossible to enforce a quality measure that all buyers can agree with.
It is thus up to the buyer to assess the quality q of the good received. Each buyer b also has a value function V^g_b(p, q) for each good g ∈ G that she might wish to buy. The function returns a number that represents the value that b assigns to that particular good at that particular price and quality. Each seller s ∈ S, on the other hand, has a cost c^g_s associated with each good he can produce. Since we assume that costs and payments are expressed in the same units (i.e. money) then, if seller s gets paid p for good g, his profit will be Profit(p, c^g_s) = p − c^g_s. The buyers, therefore, have the goal of maximizing the value they get for their transactions, while the sellers have the goal of maximizing their profits.
information. These instances are defined in part by the set of other agents present, their capabilities and preferences, and the dynamics of the system. In order to precisely determine what these instances are, and in the hopes of providing a more general framework for studying the effects of increased agent-modeling capabilities within our economic model, we have defined a set of techniques that our agents can use for learning and using models.
We divide the agents into classes that correspond to their modeling capabilities. The hierarchy we present is inspired by the Recursive Modeling Method (Gmytrasiewicz, 1996), but is function-based rather than matrix-based, and includes learning. We will first describe our agents at the knowledge level, stating only the type of knowledge the agents are either trying to acquire through learning, or already have (i.e. knowledge that was directly implemented by the designers of the agents), and will then explain the details of how this knowledge was implemented.
At the most abstract level, we can say that every agent i is trying to learn the oracle decision function ∆_i : w → a_i, which maps the state w of the world into the action a_i that the agent should take in that state. This function will not be fixed throughout the lifetime of the agent because the other agents are also engaged in some kind of learning themselves. The agents that try to directly learn ∆_i(w) we refer to as 0-level agents, because they have no explicit models of other agents. In fact, they are not aware that there are other agents in the world. Any such agent i will learn a decision function δ_i : w → a_i, where w is what agent i knows about its external world and a_i is its rational action in that state. For example, a web search agent might look at the going price for web searches, in order to determine how much to charge for its service.
Agents with 1-level models of other agents, on the other hand, are aware that there are other agents out there but have no idea what the "interior" of these agents looks like. They have two kinds of knowledge: a set of functions δ_ij : w → a_j which capture agent i's model of each of the other agents (j), and δ_i : (w, a_-i) → a_i which captures i's knowledge of what action to take given w and the collective actions a_-i the others will take. We define a_-i = {a_1 ··· a_{i−1}, a_{i+1} ··· a_n}, where n is the number of agents. An agent's model of others might not be correct; therefore, it is not always true that δ_j(w) = δ_ij(w). The δ_ij(w) knowledge for all j ≠ i turns out to be easier to learn than the joint action δ_i(w, a_-i) because the set of possible hypotheses is smaller.
Agents with 2-level models are assumed to have deeper knowledge about the other agents; that is, they have knowledge of the form δ_ij : (w, a_-j) → a_j. This knowledge tells them how others determine which action to take. They also know what actions others think others are going to take, i.e. δ_ijk : w → a_k, and (like 1-level modelers) what action they should take given others' actions, δ_i : (w, a_-i) → a_i. Again, the δ_ijk(w) is easier to learn than the other two, as long as all agents use the same features to discriminate among the different worlds (i.e. share the same w).
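To make the three levels concrete, the following minimal Python sketch mirrors the knowledge just described. It is our own illustration, not the authors' implementation: the function names (act_0_level, delta_i, delta_ij, delta_ijk) and the dictionary-based representation of the other agents are assumptions.

```python
# Illustrative only: signatures mirror the knowledge levels described above.
# 'others' is the collection of the other agents' identifiers.

def act_0_level(delta_i, w):
    """0-level: act directly from the learned delta_i : w -> a_i."""
    return delta_i(w)

def act_1_level(delta_i, delta_ij, w, others):
    """1-level: predict each other agent j with the learned delta_ij : w -> a_j,
    then apply the built-in delta_i : (w, a_-i) -> a_i."""
    a_others = {j: delta_ij[j](w) for j in others}
    return delta_i(w, a_others)

def act_2_level(delta_i, delta_ij, delta_ijk, w, others):
    """2-level: predict what each j thinks the others will do (learned delta_ijk),
    feed that into the built-in delta_ij : (w, a_-j) -> a_j, then act with delta_i."""
    a_others = {}
    for j in others:
        a_minus_j = {k: delta_ijk[j][k](w) for k in others if k != j}
        a_others[j] = delta_ij[j](w, a_minus_j)
    return delta_i(w, a_others)
```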
Populating the knowledge
If the different level agents had to learn all the knowledge then, since the 0-level agents have a lot less knowledge to learn, they would learn it much faster. However, in the economic domain, it is likely that the designer has additional knowledge which could be incorporated into the agents. The agents we built incorporated extra knowledge along these lines. We decided that 0-level agents would learn all their knowledge by tracking their actions and the rewards they got. These agents, therefore, receive no extra domain knowledge from the designers and learn everything from experience. 1-level agents, on the other hand, have a priori knowledge of what action they should take given the actions that others will take. That is, while they try to learn knowledge of the form δ_ij(w) by observing the actions others take (i.e. in a form of supervised learning where the other agents act as tutors), they already have knowledge of the form δ_i(w, a_-i). In our economic domain, it is reasonable to assume that agents have this knowledge since, in fact, this type of knowledge can be easily generated. That is, if I know what all the other sellers are going to bid, and the prices that the buyer is willing to pay, then it is easy for me to determine which price to bid. We must also point out that in this domain, the δ_i(w, a_-i) knowledge cannot be used by a 0-level agent. If this knowledge had said, for instance, that from some state w agent i will only ever take one of a few possible actions, then this knowledge could have been used to eliminate impossibilities from the δ_i(w) knowledge of a 0-level agent. However, this situation never arises in our domain because, as we shall see in the following Sections, the states used by the agents permit the set of reasonable actions to always be equal to the set of all possible actions.
The 2-level agents learn their δ_ijk(w) knowledge from observations of others' actions, under the already stated assumption that there is common knowledge of the fact that all agents see the actions taken by all. The rest of the knowledge, i.e. δ_ij(w, a_-j) and δ_i(w, a_-i), is built into the 2-level agents a priori. As with 1-level agents, we cannot use the δ_ij(w, a_-j) knowledge to add δ_ij(w) knowledge to a 1-level modeler, because other agents are also free to take any one of the possible actions in any state of the world. There are many reasonable ways to explain how the 2-level agents came to possess the δ_ij(w, a_-j) knowledge. It could be, for instance, that the designer assumed that the other designers would build 1-level agents with the same knowledge we just described. This type of recursive thinking (i.e. "they will do just as I did, so I must do one better"), along with the obvious expansion of the knowledge structure, could be used to generate n-level agents, but so far we have concentrated only on the first three levels. The different forms of knowledge, and their form of acquisition, are summarized in Table 1. In the following sections, we talk about each one of these agents in more detail and give some specifics on their implementation. Our current model emphasizes transactions over a single good, so each agent is only a buyer or a seller, but cannot be both.
Agents with 0-level models
Agents with 0-level models must learn everything they know from observations they make about the environment, and from any rewards they get. In our economic society this means that buyers see the bids they receive and the good received after striking a contract, while sellers see the request for bids and the profit they made (if any). In general, these agents get some input, take an action, then receive some reward. This is the same framework used in reinforcement learning, which is why we decided to use a form of reinforcement learning (Sutton, 1988; Watkins and Dayan, 1992) for implementing learning in our agents.
Both buyers and sellers will use the equations in the next few sections for determining what actions to take. But, with a small probability ε they will choose to explore, instead of exploit, and will pick their actions at random (except for the fact that sellers never bid below cost). The value of ε is initially 1 but decreases with time to some empirically chosen, fixed minimum value ε_min. That is,
ε_{t+1} = γ·ε_t     if γ·ε_t > ε_min
          ε_min     otherwise
where 0 < γ < 1 is some annealing factor.

Figure 1: View of the protocol. We show only one buyer B and three sellers S1, S2, and S3. At time 1 the buyer requests bids for some good. At time 2 the sellers send their prices for that good. At time 3 the buyer picks one of the bids, pays the seller the amount and then, at time 4, she receives the good.
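A minimal sketch of this annealed ε-greedy exploration schedule, in Python (our own illustrative code; the default γ and ε_min values are the ones used later in the Tests section):

```python
import random

def anneal_epsilon(eps, gamma=0.99, eps_min=0.05):
    """One annealing step: eps_{t+1} = gamma * eps_t, clipped at eps_min."""
    return max(gamma * eps, eps_min)

def choose_action(actions, best_action, eps):
    """With probability eps explore (pick a random action), otherwise exploit."""
    return random.choice(actions) if random.random() < eps else best_action
```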
Level | Form of Knowledge | Method of Acquisition
---|---|---
0-level | δ_i(w) | Reinforcement Learning
1-level | δ_i(w, a_-i) | Previously known
1-level | δ_ij(w) | Learn from observation
2-level | δ_i(w, a_-i) | Previously known
2-level | δ_ij(w, a_-j) | Previously known
2-level | δ_ijk(w) | Learn from observation

Table 1: The different forms of knowledge and their method of acquisition.
Buyers with 0-level models.
A buyer b will start by requesting bids for a good g. She will then receive all bids for good g and will pick the seller:
s* = arg max_{s∈S} f^g(p^g_s)    (1)
This function implements the buyer's δ_b(w) which, in this case, can be rephrased as δ_b(p_1 ··· p_{|S|}). The function f^g(p) returns the value the buyer expects to get if she buys good g at a price of p. It is learned using a simple form of reinforcement learning, namely:
f^g_{t+1}(p) = (1 − α)·f^g_t(p) + α·V^g_b(p, q)    (2)
Here α is the learning rate, p is the price b pays for the good, and q is the quality she ascribes to it. The learning rate is initially set to 1 and, like ε, is decreased until it reaches some fixed minimum value α_min.
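As an illustration of Eqs. (1)-(2), a 0-level buyer could be sketched as follows. This is our own code, not the authors' implementation: the dictionary representation of f^g, the class name, and the annealing factor used for α are assumptions.

```python
class ZeroLevelBuyer:
    def __init__(self, prices, value_fn, alpha=1.0, alpha_min=0.1):
        self.f = {p: 0.0 for p in prices}   # f^g(p): expected value of buying at price p
        self.value_fn = value_fn            # V^g_b(p, q)
        self.alpha, self.alpha_min = alpha, alpha_min

    def pick_seller(self, bids):
        # Eq. (1): bids maps seller -> offered price (assumed to lie in P);
        # choose the seller whose price maximizes f^g.
        return max(bids, key=lambda s: self.f[bids[s]])

    def update(self, price_paid, quality):
        # Eq. (2): f^g_{t+1}(p) = (1 - alpha) * f^g_t(p) + alpha * V^g_b(p, q)
        self.f[price_paid] = ((1 - self.alpha) * self.f[price_paid]
                              + self.alpha * self.value_fn(price_paid, quality))
        # anneal alpha toward alpha_min (the 0.99 factor is an assumption)
        self.alpha = max(self.alpha * 0.99, self.alpha_min)
```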
Sellers with 0-level models.
When asked for a bid, the seller s will provide one whose price is greater than or equal to the cost for producing it (i.e. p^g_s ≥ c^g_s). From these prices, he will choose the one with the highest expected profit:
p*_s = arg max_{p∈P} h^g_s(p)    (3)
Again, this function encompasses the seller's δ_s(g) knowledge, where we now have that the states are the goods being sold, w = g, and the actions are the prices offered, a = p. The function h^g_s(p) returns the profit s expects to get if he offers good g at a price p. It is also learned using reinforcement learning, as follows:
h^g_{t+1}(p) = (1 − α)·h^g_t(p) + α·Profit^g_s(p)    (4)
where
Profit^g_s(p) = p − c^g_s   if his bid is chosen
               0            otherwise    (5)
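The corresponding 0-level seller, sketched in the same illustrative style (our own code; class name, dictionary representation of h^g_s, and the annealing factor are assumptions):

```python
class ZeroLevelSeller:
    def __init__(self, prices, cost, alpha=1.0, alpha_min=0.1):
        self.prices = [p for p in prices if p >= cost]   # never bid below cost
        self.h = {p: 0.0 for p in self.prices}           # h^g_s(p): expected profit at price p
        self.cost = cost
        self.alpha, self.alpha_min = alpha, alpha_min

    def bid(self):
        # Eq. (3): offer the price with the highest expected profit
        return max(self.prices, key=lambda p: self.h[p])

    def update(self, price_bid, won):
        profit = (price_bid - self.cost) if won else 0.0           # Eq. (5)
        self.h[price_bid] = ((1 - self.alpha) * self.h[price_bid]
                             + self.alpha * profit)                # Eq. (4)
        self.alpha = max(self.alpha * 0.99, self.alpha_min)
```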
Agents with One-level Models
The next step is for an agent to keep one-level models of the other agents. This means that it has no idea of what the interior (i.e. "mental") processes of the other agents are, but it recognizes the fact that there are other agents out there whose behaviors influence its rewards. The agent, therefore, can only model others by looking at their past behavior and trying to predict, from it, their future actions. The agent also has knowledge, implemented as functions, that tells it what action to take, given a probability distribution over the set of actions that other agents can take. In the actual implementation, as shown below, the δ_i(w, a_-i) knowledge takes into account the fact that the δ_ij(w) knowledge is constantly being learned and, therefore, is not correct with perfect certainty.
Buyers with one-level models.
A buyer with one-level models can now keep a history of the qualities she ascribes to the goods returned by each seller. She can, in fact, remember the last N qualities returned by some seller s for some good g, and define a probability density function q^g_s(x) over the qualities x returned by s (i.e. q^g_s(x) returns the probability that s returns an instance of good g that has quality x). This function provides the δ_bs(g) knowledge. She can use the expected value of this function to calculate which seller she expects will give her the highest expected value.
s* = arg max_{s∈S} E(V^g_b(p^g_s, q^g_s(x))) = arg max_{s∈S} (1/|Q|) Σ_{x∈Q} q^g_s(x)·V^g_b(p^g_s, x)    (6)
The δ_b(g, q_1 ··· q_{|S|}) is given by the previous function which simply tries to maximize the value the buyer expects to get. The buyer does not need to model other buyers since they do not affect the value she gets.
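A sketch of the 1-level buyer's choice rule, Eq. (6), in our own illustrative Python (function and parameter names are assumptions; the constant 1/|Q| factor is dropped because it does not change the arg max):

```python
def pick_seller_one_level(bids, quality_hist, value_fn, qualities):
    """Eq. (6): choose the seller with the highest expected value.

    bids:         {seller: offered price p^g_s}
    quality_hist: {seller: list of the last N observed qualities}
    value_fn:     V^g_b(p, q)
    qualities:    the finite quality set Q
    """
    def expected_value(s):
        hist = quality_hist[s]
        # q^g_s(x): empirical probability that seller s returns quality x
        q = {x: hist.count(x) / len(hist) for x in qualities}
        return sum(q[x] * value_fn(bids[s], x) for x in qualities)
    return max(bids, key=expected_value)
```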
Sellers with one-level models.
Each seller will try to predict what bid the other sellers will submit (based solely on what they have bid in the past), and what bid the buyer will likely pick. A complete implementation would require the seller to remember past combinations of buyers, bids and results (i.e. who was buying, who bid what, and who won). However, it is unrealistic to expect a seller to remember all this since there are at least |P|^|S| · |B| possible combinations.
However, the seller's one-level behavior can be approximated by having him remember the last N prices accepted by each buyer b for each good g, and form a probability density function m^g_b(x), which returns the probability that b will accept (pick) price x for good g. The expected value of this function provides the δ_sb(g) knowledge. Similarly, the seller remembers each other seller s′'s last N bids for good g and forms n^g_{s′}(y), which gives the probability that s′ will bid y for good g. The expected value of this function provides the δ_ss′(g) knowledge. The seller s can now determine which bid maximizes his expected profits.
p* = arg max_{p∈P} (p − c^g_s) · Π_{s′∈S−{s}} Σ_{p′∈P} [ n^g_{s′}(p′)  if m^g_b(p′) ≤ m^g_b(p);  0 otherwise ]    (7)
Note that this function also does a small amount of approximation by assuming that s wins whenever there is a tie. The function calculates the best bid by determining, for each possible bid, the product of the profit and the probability that the agent will get that profit. Since the profit for lost bids is 0, we only need to consider the cases where s wins. The probability that s will win can then be found by calculating the product of the probabilities that his bid will beat the bids of each of the other sellers. The function approximates the δ_s(g, p_b, p_1 ··· p_{|S|}) knowledge.
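A sketch of Eq. (7) in our own illustrative Python (the names accept_prob and rival_bid_prob are assumptions; as in the text, ties are resolved in the seller's favor):

```python
def best_bid_one_level(prices, cost, accept_prob, rival_bid_prob):
    """Eq. (7): pick the bid that maximizes expected profit for a 1-level seller.

    prices:         the finite price set P
    cost:           this seller's cost c^g_s
    accept_prob:    m^g_b(p), probability that the buyer accepts price p
    rival_bid_prob: {s': {p': n^g_{s'}(p')}}, learned bid distributions of the other sellers
    """
    def expected_profit(p):
        win_prob = 1.0
        for dist in rival_bid_prob.values():
            # probability that our bid p beats (or ties) this rival's bid p'
            win_prob *= sum(prob for pp, prob in dist.items()
                            if accept_prob(pp) <= accept_prob(p))
        return (p - cost) * win_prob
    return max((p for p in prices if p >= cost), key=expected_profit)
```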
Agents with Two-level Models
The intentional models we use correspond to the functions used by agents that use one-level models. The agents' δ_i(w, a_-i) knowledge has again been expanded to take into account the fact that the deeper knowledge is learned and might not be correct. The δ_ijk(w) knowledge is learned from observation, under the assumption that there is common knowledge of the fact that all agents see the bids given by all agents.
Buyers with two-level models.
Since the buyer receives bids from the sellers, there is no need for her to try to out-guess or predict what the sellers will bid. She is also not concerned with what the other buyers are doing since, in our model, there is an effectively infinite supply of goods. The buyers are, therefore, not competing with each other and do not need to keep deeper models of others.
Sellers with two-level models.
A seller will model other sellers as if they were using the one-level models. That is, he thinks they will model others using policy models and make their decisions using the equations presented in Section 4.3.2. He will try to predict their bids and then try to find a bid for himself that the buyer will prefer more than all the bids of the other sellers. His model of the buyer will also be an intentional model. He will model the buyers as though they were implemented as explained in Section 4.3.1. A seller, therefore, assumes that it has the correct intentional models of other agents.
The algorithm he follows is to first use his models of the sellers to predict what bids p_i they will submit. He has a model of the buyer, C(s_1 ··· s_n, p_1 ··· p_n) → s_i, that tells him which seller she might choose given the set of bids p_i submitted by each seller s_i. The seller s_j uses this model to determine which of his bids will bring him higher profit, by first finding the set of bids he can make that will win:

P′ = {p_j | p_j ∈ P, s_j = C(s_1 ··· s_j ··· s_n, p_1 ··· p_j ··· p_n)}    (8)
And from these, finding the one with the highest profit:
p* = arg max_{p∈P′} (p − c^g_s)    (9)
These functions provide the δ_s(g, p_b, p_1 ··· p_{|S|}) knowledge.
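A sketch of the 2-level seller's bid selection, Eqs. (8)-(9), again in our own illustrative Python (predicted_bids and buyer_choice are assumed names; buyer_choice stands for the intentional buyer model C):

```python
def best_bid_two_level(my_id, prices, cost, predicted_bids, buyer_choice):
    """Eqs. (8)-(9): choose the most profitable bid the buyer model predicts we would win.

    my_id:          this seller's identifier
    predicted_bids: {seller: predicted price} for all other sellers (from their 1-level models)
    buyer_choice:   C(bids) -> winning seller, the intentional model of the buyer
    """
    winning = []
    for p in prices:
        if p < cost:
            continue
        bids = dict(predicted_bids)
        bids[my_id] = p
        if buyer_choice(bids) == my_id:        # Eq. (8): bids we expect to win
            winning.append(p)
    # Eq. (9): among the winning bids, take the most profitable one
    return max(winning, key=lambda p: p - cost) if winning else None
```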
Tests
Since there is no obvious way to analytically determine how different populations of agents would interact and, of greater interest to us, how much better (or worse) the agents with deeper models would fare, we decided to implement a society of the agents described above and run it to test our hypotheses. In all tests, we had 5 buyers and 8 sellers. The buyers had the same value function V_b(p, q) = 3q − p, which means that if p = q then the buyers will prefer the seller that offers the higher quality. The quality that they perceived was the same only on average, i.e. any particular good might be thought to have quality that is slightly higher or lower than expected. All sellers had costs equal to the quality they returned in order to support the common sense assumption that quality goods cost more to produce. A set of these buyers and sellers is what we call a population. We tried various populations; within each population we kept constant the agents' modeling levels, the value assessment functions and the qualities returned. The tests involved a series of such populations, each one with agents of different modeling levels, and/or sellers with different quality/costs. We also set α_min = 0.1, ε_min = 0.05, and γ = 0.99. There were 100 runs done for each population of agents, each run consisting of 10000 auctions (i.e. iterations of the protocol). The lessons presented in the next section are based on the averages of these 100 runs.
Lessons
From our tests we were able to discern several lessons about the dynamics of different populations of agents. Some of these lead to methods that can be used to make quantitative predictions about agents' performance, while others make qualitative assessments about the type of behaviors we might expect. We detail these in the next subsections, and summarize them in Table 2.
Micro versus macro behaviors.
In all tests, we found that the behavior for any particular run does not necessarily reflect the average behavior of the system. The prices have a tendency to sometimes reach temporary stable points. These conjectural equilibria, as described in (Hu and Wellman, 1996), are instances when all of the agents' models are correctly predicting the others' behavior and, therefore, the agents do not need to change their models or their actions. These conjectural equilibria points are seldom global optima for the agents. If one of our agents finds itself at one of these equilibrium points, since the agent is always exploring with probability ε, it will in time discover that this point is only a local optimum (i.e. it can get more profit selling/buying at a different price) and will change its actions accordingly. Only when the price is an equilibrium price do we find that the agents continue to forever take the same actions, leaving the price at its equilibrium point. In order to understand the more significant macro-level behaviors of the system, we present results that are based on the averages from many runs. While these averages seem very stable, and a good first step in learning to understand these systems, in the future we will need to address some of the micro-level issues. We do notice from our data that the micro-level behaviors (e.g. temporary conjectural equilibria, price fluctuations) are much more closely tied, usually in intuitive ways, to the agents' learning rate α and exploration rate ε. That is, higher rates for both of these lead to more price fluctuations and shorter temporary equilibria.
0-level buyers and sellers.
This type of population is equivalent to a "blind" auction, where the agents only see the price and the good, but are prevented from seeing who the seller (or buyer) was. As expected, we found that an equilibrium is reached as long as all the sellers are providing the same quality. This is the case for population 1 in Figure 2. Otherwise, if the sellers offer different quality goods, the price fluctuates as the buyers try to find the price that on the whole returns the best quality, and the sellers try to find the price the buyers favor. In these populations, the sellers offering the higher quality, at a higher cost, lose money. Meanwhile, sellers offering lower quality, at a lower cost, earn some extra income by selling their low quality goods to buyers that expect, and are paying for, higher quality. As more sellers start to offer lower quality, we find that the mean price actually increases, evidently because price acts as a signal for quality and the added uncertainty makes the higher prices more likely to give the buyer a higher value. We see this in Figure 2, where population 1 has all sellers returning the same quality while in each successive population more agents offer lower quality. The price distribution for population 1 is concentrated on 9, but for populations 2 through 6 it flattens and shifts to the right, increasing the mean price. It is only by population 7 that it starts to shift back to the left, thus reducing the mean price, as seen in Figure 3. That is, it is only after a significant number of sellers start to offer lower quality that we see the mean price decrease.

Figure 2: The prices are 0 ··· 19. The columns represent the percentage of time the good was sold at each price, in each population. In p1 sellers return qualities {8, 8, 8, 8, 8, 8, 8, 8}, in p2 it is {8, 8, 8, 8, 8, 8, 7, 8}, and so on such that by p8 it is {1, 2, 3, 4, 5, 6, 7, 8}. The highest peak in all populations corresponds to price 9.

6.3 0-level buyers and sellers, plus one 1-level seller.
In these population sets we explored the advantages that a 1-level seller has over identical 0-level sellers. The advantage was non-existent when all sellers returned the same quality (i.e. when the prices reached an equilibrium as shown in population 1 in Figure 4), but increased as the sellers started to diverge in the quality they returned. In order to make these findings useful when building agents, we needed a way to make quantitative predictions as to the benefits of keeping 1-level models. It turns out that these benefits can be predicted, not by the population type as we had first guessed, but by the price volatility. We define volatility as the number of times the price changes from one auction to the next, divided by the total number of auctions. Figure 5 shows the linear relation between volatility and the percentage of times the 1-level seller wins. The two lines correspond to two "types" of volatility. The first line includes populations 1 through 5 (p1-p5). It reflects the case where the buyers' second-favorite (and possibly, the third, fourth, etc.) equilibrium price is greater than her most preferred price. In these cases the buyers and sellers fight among the two most preferred prices, the sellers pulling towards the higher equilibrium price and the buyers towards the lower one, as shown by the two peaks in populations 4 and 5 in Figure 4. The other line, which includes populations 6 and 7, corresponds to cases where the buyers' preferred equilibrium price is greater than the runner-ups. In these cases there is no contest between two equilibria. We observe only one peak in the price distribution for these populations.
The slope of these lines can be easily calculated and the resulting function can be used by a seller agent for making a quantitative prediction as to how much he would benefit by switching to 1-level models. That is, he could measure price volatility, multiply it by the appropriate slope, and the resulting number would be the percentage of times he would win. However, for this to work the agent needs to know that all the buyers and the other sellers are 0-level modelers because different types of populations lead to different slopes. Also, slight changes in our learning parameters (0.02 ≤ ε_min ≤ 0.08 and 0.05 ≤ α_min ≤ 0.2) lead to slight changes in the slopes, so these would have to be taken into account if the agent is actively changing its parameters. We also want to make clear a small caveat, which is that the volatility that is correlated to the usefulness of keeping 1-level models is the volatility of the system with the agent already doing 1-level modeling. Fortunately, our experiments show that having one agent change from 0-level to 1-level does not have a great effect on the volatility as long as there are enough (i.e. more than five or so) other sellers.
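The volatility measure defined above and the resulting linear prediction are simple to compute; a minimal sketch in our own illustrative Python (the slope is population-specific and has to be estimated empirically, as in Figure 5):

```python
def price_volatility(prices):
    """Number of times the winning price changes from one auction to the next,
    divided by the total number of auctions."""
    changes = sum(1 for a, b in zip(prices, prices[1:]) if a != b)
    return changes / len(prices)

def predicted_win_rate(volatility, slope):
    """Linear prediction of how often a 1-level seller would win, given the
    empirically fitted slope for the current population type."""
    return slope * volatility
```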
The reason volatility is such a good predictor is that it serves as an accurate assessment of how dynamic the system is and, in turn, of the complexity of the learning problem faced by the agents. It turns out that the learning problem faced by 1-level agents is "simpler" than the one faced by 0-level modelers. Our 0-level agents use reinforcement learning to learn a good match between world states and the actions they should take. The 1-level agents, on the other hand, can see the actions other agents take and do not need to learn their models through indirect reinforcements. They instead use a form of supervised learning to learn the models of others. Since 1-level agents need fewer interactions to learn a correct model, their models will, in general, be better than those of 0-level agents in direct proportion to the speed with which the target function changes. That is, in a slow-changing world both of them will have time enough to arrive at approximately correct models, while in a fast-changing world only the 1-level agents will have time to arrive at an approximately correct model. This explains why high price volatility is correlated to an increase in the 1-level agent's performance. However, as we saw, the relative advantages for different volatilities (i.e. the slope in Figure 5) will also depend on the shape of the price distribution and the particular population of agents.
Finally, in all populations where the buyers are 0-level, we saw that it really pays for the sellers to have low costs because this allows them to lower their prices to fit almost any demand. Since the buyers have 0-level models, the sellers with low quality and cost can raise their prices when appropriate, in effect "pretending" to be the high-quality sellers, and make an even more substantial profit. This extra profit comes at the cost of a reduction in the average value that the buyers receive. In other words, the buyers get less value because they are only 0-level agents and are less able to detect the sellers' deception. In the next Section we will see how this is not true for 1-level buyers.
Of course, the 1-level sellers were more successful at this deception strategy than the 0-level sellers. Figure 6 shows the profit of several agents in a population as a function of their cost. We can see how the 0-level agents' profit decreases with increasing costs, and how the 1-level agent's profit is much higher than the 0-level with the same costs. We also notice that, since the 0-level agents are not as successful as the 1-level at taking advantage of their low costs, the first 0-level seller (that returns quality 2) has lower profit than the rest as some of his profit was taken away by the 1-level seller (that returns the same quality).
1-level buyers and 0 and 1-level sellers.
In these populations the buyers have the upper hand. They quickly identify those sellers that provide the highest quality goods and buy exclusively from them. The sellers do not benefit from having deeper models; in fact, Figure 7 shows how the 1-level seller's profit is less than that of a similar 0-level seller because the 1-level seller tries to charge higher prices than the 0-level seller. The 1-level buyers do not fall for this trick: they know what quality to expect, and buy more from the lower-priced 0-level seller(s). We have here a case of erroneous models: 1-level sellers assume that buyers are 0-level, and since this is not true, their erroneous deductions lead them to make bad decisions. To stay a step ahead, sellers would need to be 2-level in this case. In Figure 7, the first population has all sellers returning a quality of 8 while by population 7 they are returning qualities of {8, 2, 3, 4, 5, 6, 7, 8}, respectively, with the 1-level always returning quality of 8. We notice that the difference in profits between the 0-level and the 1-level increases with successive populations. This is explained by the fact that in the first population all seven 0-level sellers are returning the same quality, while by population 7 only the 0-level pictured (i.e. the first one) is still returning quality 8. This means that his competition, in the form of other 0-level sellers returning the same quality, decreases for successive populations. Meanwhile, in all populations there is only one 1-level seller who has no competition from other 1-level sellers. To summarize, the 0-level seller's profit is always higher than the similar 1-level seller's, and the difference increases as there are fewer other competing 0-level sellers who offer the same quality.
1-level buyers and several 1-level sellers.
We have shown how 1-level sellers do better, on average, than 0-level sellers when faced with 0-level buyers, but this is not true anymore if too many 0-level sellers decide to become 1-level. Figure 8 shows how the profits of a 1-level seller decrease as he is joined by other 1-level sellers. In this Figure the sellers are returning qualities of {2, 2, 2, 2, 2, 3, 4}. Initially they are all 0-level, then one of the sellers with quality 2 becomes 1-level (he is the seller shown in the Figure), then another one and so on, until there is only one 0-level seller with quality two. Then the seller with quality three becomes 1-level and, finally, the seller with quality four becomes 1-level. At this point we have six 1-level sellers and one 0-level seller. We can see that with more than four 1-level sellers the 0-level seller is actually making more profit than the similar 1-level seller. The 1-level seller's profit decreases because, as more sellers change from 0 to 1-level, they are competing directly with him since they are offering the same quality and are the same level. Notice that the 1-level seller's curve flattens after four 1-level sellers are present in the population. The reason is that the next sellers to change over to 1-level return qualities of 3 and 4, respectively, so that they do not compete directly with the seller pictured. His profits, therefore, do not keep decreasing. For this test, and other similar tests, we had to use a population of sellers that produce different qualities because, as explained in Section 6.3, if they had returned the same quality then an equilibrium would have been reached which would prevent the 1-level sellers from making a significantly greater profit than the 0-level sellers.

6.6 1-level buyers and 1 and 2-level sellers.
Assuming that the 2-level seller has perfect models of the other agents, we find that he wins an overwhelming percentage of the time. This is true, surprisingly enough, even when some of the 1-level sellers offer slightly higher quality goods. However, when the quality difference becomes too great (i.e. greater than 1), the buyers finally start to buy from the high quality 1-level sellers. This case is very similar to the ones with 0-level buyers and 0 and 1-level sellers and we can start to discern a recurring pattern. In this case, however, it is much more computationally expensive to maintain 2-level models. On the other hand, since these 2-level models are perfect, they are better predictors than the 1-level, which explains why the 2-level seller wins much more than the 1-level seller from Section 6.3.
Buyers | Sellers | Lessons
---|---|---
0-level | 0-level | Equilibrium reached only when all sellers offer the same quality. Otherwise, we get oscillations. Mean price increases when quality offered decreases.
0-level | Any | Sellers have big incentives to lower quality/cost.
0-level | 0-level and one 1-level | 1-level seller beats others. Quantitative advantage of being 1-level predicted by volatility and price distribution.
0-level | 0-level and many 1-level | 1-level sellers do better, as long as there are not too many of them.
1-level | 0-level and one 1-level | Buyers have upper hand. They buy from the most preferred seller. 1-level sellers are usually at a disadvantage.
1-level | 1-level and one 2-level | Since 2-level has perfect models, it wins an overwhelming percentage of time, except when it offers a rather lower quality.

Table 2: Summary of lessons. In all cases the buyers had identical value and quality assessment functions. Sellers were constrained to always return the same quality.
Conclusions
We have presented a framework for the development of agents with incremental modeling/learning capabilities, in an economic society of agents. These agents were built, and the execution of different agent populations led us to the discovery of the lessons summarized in Table 2. The discovery of volatility and price distributions as predictors of the benefits of deeper models will be very useful as guides for deciding how much modeling capability to build into an agent. This decision could either be made prior to development or, given enough information, it could be made at runtime. We are also encouraged by the fact that increasing the agents' capabilities changes the system in ways that we can recognize from our everyday economic experience. Some of the agent structures shown in this paper are already being implemented into the UMDL (Atkins et al., 1996). We have a basic economic infrastructure that allows agents to engage in commerce, and the agents use customizable heuristics for determining their strategic behavior. We are working on incorporating the more advanced modeling capabilities into our agents in order to enable more interesting strategic behaviors.
Our results showed how sellers with deeper models fare better, in general, even when they produce less valuable goods. This means that we should expect those types of agents to, eventually, be added into the UMDL. Fortunately, this advantage is diminished by having buyers keep deeper models. We expect that there will be a level at which the gains and costs associated with keeping deeper models balance out for each agent. Our hope is to provide a mechanism for agents to dynamically determine this cutoff and constantly adjust their behavior to maximize their expected profits given the current system behavior. The lessons in this paper are a significant step in this direction. We have seen that one needs to look at price volatility and at the modeling levels of the other agents to determine what modeling level will give the highest profits. We have also learned how buyers and sellers of different levels and offering different qualities lead to different system dynamics which, in turn, dictate whether the learning of nested models is useful or not.
We are considering the expansion of the model with the possible additions of agents that can both buy and sell, and sellers that can return different quality goods. Allowing sellers to change the quality returned to fit the buyer will make them more competitive against 1-level buyers. We are also continuing tests on many different types of agent populations in the hopes of getting a better understanding of how well different agents fare in the different populations.
In the long run, another offshoot of this research could be a better characterization of the types of environments and how they allow/inhibit "cheating" behavior in different agent populations. That is, we saw how, in our economic model, agents are sometimes rewarded for behavior that does not seem to be good for the community as a whole (e.g. when some of the sellers raised their price while lowering the quality they offered). The rewards, we are finding, start to diminish as the other agents become "smarter". We can intuit that the agents in these systems will eventually settle on some level of nesting that balances their costs of keeping nested models with their gains from taking better actions (Kauffman, 1994). It would be very useful to characterize the environments, agent populations, and types of "equilibria" that these might lead to, especially as interest in multi-agent systems grows. | 7,648 |
1903.05238 | 2963943458 | Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. Resulting grasps are visually realistic because the hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands) and it is also easily customizable. Moreover, it enables interaction with different objects regardless of their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On one hand, qualitative evaluation was used in the assessment of abstract aspects, such as motor control, finger movement realism, and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grips. Performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience. | Grasping action is the most basic component of any interaction and it is composed of three major components @cite_21 . The first one is related to the process of approaching the arm and hand to the target object, considering the overall body movement. The second component focuses on the hand and body pre-shaping before the grasping action. Finally, the last component fits the hand to the geometry of the object by closing each of the fingers until contact is established. | {
"abstract": [
"Abstract This paper addresses the important issue of automating grasping movement in the animation of virtual actors, and presents a methodology and algorithm to generate realistic looking grasping motion of arbitrary shaped objects. A hybrid approach using both forward and inverse kinematics is proposed. A database of predefined body postures and hand trajectories are generalized to adapt to a specific grasp. The reachable space is divided into small subvolumes, which enables the construction of the database. The paper also addresses some common problems of articulated figure animation. A new approach for body positioning with kinematic constraints on both hands is described. An efficient and accurate manipulation of joint constraints is also presented. Finally, we describe an interpolation algorithm which interpolates between two postures of an articulated figure by moving the end effector along a specific trajectory and maintaining all the joint angles in the feasible range. Results are quite satisfactory, and some are shown in the paper."
],
"cite_N": [
"@cite_21"
],
"mid": [
"1999329153"
]
} | A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments | With the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field-of-view, frame rate, and interaction among other technical specifications. In order to enhance the user's VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact and manipulate scene objects as in the real world.
Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators), and also teleoperation (e.g. robotics), where the user's ability to interact with and explore the simulated environments is paramount for achieving an immersive experience. For this purpose, most VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However,
grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and their geometries, and also the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, relying completely on the realism of the animations. Moreover, these approaches are constrained to a limited number of simple object geometries and unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting their interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world.
In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries, without the need for a predefined grasp animation for each one. With our approach, fingers are automatically fitted to the object shape and geometry. We constrain the motion of the hand finger phalanges, checking in real time for collisions with the object geometry.
Our grasping system was analyzed both qualitatively and quantitatively. On the one hand, for the qualitative analysis, the grasping system was implemented in a photorealistic environment where the user is freely able to interact with real world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that addresses the user interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience. On the other hand, a quantitative grasping system analysis was carried out, measuring the time a user needs to grasp an object and the grasp quality, based on a novel error metric which quantifies the overlap between hand fingers and the grasped object.
From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time the user needed to grasp the object and also the contact points. This information, alongside other data provided by UnrealROX [2] such as depth maps, instance segmentations, normal maps, 3D bounding boxes and 6D object poses (see Figure 8), enables different robotic applications as described in Section 6.
In summary, we make the following three contributions:
• We propose a real-time, realistic looking and flexible grasping system for natural interaction with arbitrary shaped objects in virtual reality environments;
• We propose a novel metric and procedure to analyze visual grasp quality in VR interactions by quantifying hand-object overlapping;
• We provide the contact points extracted during the interaction in both local and global system coordinates.
The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is presented in Section 3, where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, limitations of our approach are covered in Section 7 alongside several future works. Finally, some conclusions are drawn in the last Section 8.
Data-driven approaches
Data-driven grasping approaches have existed for a long time [3]. These methods are based on large databases of predefined hand poses, selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types.
From this database, grasp poses are selected according to the given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand pose selection is not straightforward since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses.
The selection process is also constrained by the hand's high number of degrees of freedom (DOFs). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs aiming to simplify hand models by reducing the number of DOFs. The results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands in conjunction without losing relevant features.
Hybrid data-driven approaches
In order to achieve realistic object interactions, physical simulation of the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in a handshake gesture. Bai et al. [13] simulate grasping an object, dropping it on a specific spot on the palm, and letting it roll on the palm. A limitation of this approach is that information about the object must be known in advance, which prevents interaction with unknown objects. By using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically-based hand manipulation poses, varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. Selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities.
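The control flow of such a randomized depth-first search with feasibility pruning and backtracking can be sketched as follows (our own schematic simplification, not the implementation of [14]; `expand` and `is_feasible` are placeholder callbacks):

```python
import random
from typing import Callable, List, Optional

def search_motion(state, horizon: int,
                  expand: Callable[[object], List[object]],
                  is_feasible: Callable[[object], bool],
                  rng: Optional[random.Random] = None) -> Optional[List[object]]:
    """Randomized depth-first search over candidate hand/contact states.
    Returns a feasible sequence of horizon + 1 states, or None if none exists."""
    rng = rng or random.Random(0)
    if horizon == 0:
        return [state]
    candidates = expand(state)
    rng.shuffle(candidates)                  # randomized exploration order
    for nxt in candidates:
        if not is_feasible(nxt):
            continue                         # prune kinematically/dynamically invalid nodes
        tail = search_motion(nxt, horizon - 1, expand, is_feasible, rng)
        if tail is not None:
            return [state] + tail
        # otherwise backtrack and try the next candidate
    return None
```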
Virtual reality approaches
This section is limited to virtual reality interaction using VR motion controllers, excluding glove-based and bare-hand approaches. Implementing the aforementioned techniques in virtual reality environments is a difficult task because optimizations are needed to keep processes running in real time. Most of the currently existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions.
Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects to solve puzzles. Hand animations are mostly procedurally generated, enabling grasping of complex geometries regardless of their grasp angle. This approach [16] is based on a graph traversal heuristic which searches for intersections between hand fingers and object surface mesh triangles. An A* heuristic finds the intersection that is nearest to the palm and also avoids invalid intersections. After calculating the angles needed to make contact with each intersection point, the highest angle is selected and the fingers are rotated accordingly.
Most solutions implemented in VR are animation-based [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering the user's interaction capabilities in the virtual environment. In [17], a distance-grab selection technique is implemented to enhance user comfort when interacting in small play areas, while sitting, or when grabbing objects on the floor. The grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasps, and a cone for long-range grabbing. Based on this approach, we use trigger volumes attached to the finger phalanges to control their movement and detect object collisions more precisely. In this way we achieve a more flexible and visually plausible grasping system, enhancing immersion and realism during interactions.
GRASPING SYSTEM
With the latest advances in rendering techniques, the visualization of virtual reality (VR) environments is increasingly photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part of enhancing the user experience and immersion. VR scene content is portrayed in a physically tangible way, inviting users to explore the environment and interact with or manipulate the represented objects as in the real world. VR devices aim to provide congruent means of primary interaction, typically a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences.
Most of the currently available VR solutions and games lack robust and natural object manipulation and interaction capabilities. This is because bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to run in real time while maintaining a high and stable frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need for a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real time.
Overview
Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows:
β’ Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletons and models.
β’ Flexible: most of the currently available VR grasp solutions are purely animation-driven, and thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible as it allows interaction with unknown objects. In this way, the user can freely decide which object to interact with, without any restrictions.
β’ Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects, regardless of their shape.
β’ Visually realistic: the grasping action is fully controlled by the user, taking advantage of their previous experience and knowledge in grasping common everyday objects such as cans, cereal boxes, fruits, tools, etc. This makes the resulting grasps visually realistic and natural, just as a human would perform them in real life.
The combination of the above described features makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive.
Our grasping system works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. a collision with another object. These components can have different shapes, such as capsule, box, sphere, etc. In Figure 2, capsule triggers are represented in green and the palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective for detecting object collisions. Notice that collision detection is performed every frame, so for heavy configurations with many triggers, performance would be harmed.
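A minimal, engine-agnostic sketch of this trigger layout and the per-frame overlap test is given below; the capsule dimensions and the `overlaps` callback are placeholders standing in for UE4's capsule trigger components and their overlap events.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CapsuleTrigger:
    bone: str              # phalanx bone the trigger is attached to, e.g. "index_dist"
    radius_cm: float
    half_height_cm: float

# Two capsule triggers on the last two phalanges of each finger (dimensions are placeholders).
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
TRIGGERS = [CapsuleTrigger(f"{finger}_{part}", radius_cm=0.7, half_height_cm=1.2)
            for finger in FINGERS for part in ("mid", "dist")]

def update_blocked_phalanges(overlaps: Callable[[str], bool]) -> Dict[str, bool]:
    """Run once per frame: a phalanx stays blocked while its trigger overlaps the object."""
    return {t.bone: overlaps(t.bone) for t in TRIGGERS}
```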
Components
Our grasping system is composed of the components represented in Figure 3. These components are defined as follows:
β’ Object selection: selects the nearest object to the hand palm. The detection area is determined by the sphere trigger attached to the palm (represented in red in Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. The smallest distance determines the nearest object, whose reference is saved for the other components (a combined code sketch of the components in this list follows Equation 2).
β’ Interaction manager: manages the capsule triggers which are attached to the finger phalanges as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until the hand is reopened or the overlap with the manipulated object ends. The phalanx state (blocked or in movement) is used as input to the grasping logic component. A phalanx is blocked if its corresponding capsule trigger overlaps the manipulated object.
β’ Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation, avoiding unexpected and unrealistic finger movement caused either by a performance drop or by other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. If an unexpected variation (i.e. a large jump) is detected between frames, the missing intermediate values are interpolated so as to keep the finger movement smooth.
β’ Grasping logic: this component manages when to grab or release an object. The decision is made based on the currently blocked phalanges determined by the interaction manager component. The object is grasped or released based on the following function:
$$f(x) = \begin{cases} \text{true}, & \text{if } (th_{ph} \lor palm) \land (in_{ph} \lor mi_{ph}) \\ \text{false}, & \text{otherwise} \end{cases} \qquad (1)$$

where $x = (th_{ph}, in_{ph}, mi_{ph}, palm)$ is defined as

$$\begin{aligned} th_{ph} &= thumb_{mid} \lor thumb_{dist} \\ in_{ph} &= index_{mid} \lor index_{dist} \\ mi_{ph} &= middle_{mid} \lor middle_{dist} \end{aligned} \qquad (2)$$
Equation 1 determines when an object is grasped or released based on the inputs defined in Equation 2, where $th_{ph}$, $in_{ph}$ and $mi_{ph}$ are the thumb, index and middle phalanges, respectively. Following human hand morphology, the subscripts mid and dist refer to the middle and distal phalanx (e.g. $thumb_{dist}$ references the distal phalanx of the thumb; at the implementation level it is a boolean value).
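The following Python sketch (engine-agnostic; helper names and thresholds are our own) summarizes the three components referenced above: nearest-object selection, smoothing of large per-frame rotation jumps, and the grasp/release decision of Equations 1 and 2.

```python
import math
from typing import Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]

def nearest_object(palm_center: Vec3, overlapped: Dict[str, Vec3]) -> Optional[str]:
    """Object selection: pick the overlapped actor closest to the palm sphere center."""
    if not overlapped:
        return None
    return min(overlapped, key=lambda name: math.dist(palm_center, overlapped[name]))

def smooth_rotation(prev_deg: float, target_deg: float, max_step_deg: float = 5.0) -> List[float]:
    """Finger movement: interpolate intermediate values when a frame-to-frame jump is too large."""
    delta = target_deg - prev_deg
    steps = max(1, math.ceil(abs(delta) / max_step_deg))
    return [prev_deg + delta * (i / steps) for i in range(1, steps + 1)]

def should_grasp(blocked: Dict[str, bool]) -> bool:
    """Grasping logic: Equations 1 and 2 evaluated over the blocked-phalanx flags."""
    th = blocked.get("thumb_mid", False) or blocked.get("thumb_dist", False)
    idx = blocked.get("index_mid", False) or blocked.get("index_dist", False)
    mid = blocked.get("middle_mid", False) or blocked.get("middle_dist", False)
    palm = blocked.get("palm", False)
    return (th or palm) and (idx or mid)

# Example: thumb and index distal phalanges blocked -> the object is grasped.
print(should_grasp({"thumb_dist": True, "index_dist": True}))  # True
```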
Implementation details
The grasping system was originally implemented in Unreal Engine 4 (UE4); however, it can easily be implemented in other engines such as Unity, which also provide the components needed to replicate the system (e.g. overlap triggers). The implementation consists of UE4 Blueprints and is structured into the components depicted in Figure 3 and described in the previous section. The implementation is available on GitHub (https://github.com/3dperceptionlab/unrealgrasp).
PERFORMANCE ANALYSIS
In order to validate our proposal, a complete performance analysis has been carried out. This analysis ranges from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. The evaluation methods are briefly described as follows:
β’ Qualitative evaluation: based on the user experience of interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 of Subsection 4.1, which addresses the qualitative evaluation in detail.
β’ Quantitative evaluation: based on the grasping quality in terms of realism (i.e. how visually plausible it is). We consider a grasp visually plausible when the hand palm or fingers are level with the object surface, as in a real-life grasp. However, when dealing with complex meshes, the collision detection precision can be significantly affected. In this case, fingers could penetrate the object surface, or remain above it when a collision was detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we propose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. The quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2.
Qualitative evaluation
Most VR experiments include qualitative and quantitative studies to measure their realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience, and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that allows for fair and easy comparisons. The different nature of VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences in order to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects:
β’ Motor Control: this aspect considers the movement of the virtual hands as a whole and its responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones.
β’ Finger Movement: this aspect takes the specific finger movement into account. Such movements must be natural and plausible. Moreover, they must react properly to the user's intent.
β’ Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects.
The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects. Table 1 (excerpt):
Q9: It seemed as if the virtual fingers were mine when grabbing an object
Q10: I felt that grabbing objects was clumsy and hard to achieve
Q11: It seemed as if finger movement were guided and unnatural
Q12: I felt that grasps were visually correct and natural
Q13: I felt that grasps were physically correct and natural
Q14: It seemed that fingers were adapting properly to the different geometries
Following [23], the users of the study will be presented with such questions right after the end of the experience, in a randomized order to limit context effects. In addition, questions must be answered following a 7-point Likert scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree. Results will be presented as a single embodiment score using the following equations:
$$\begin{aligned} \text{Motor Control} &= \big((Q1 + Q2) - (Q3 + Q4)\big)/4 \\ \text{Finger Movement Realism} &= (Q5 + Q6 + Q7)/3 \\ \text{Interaction Realism} &= \big((Q8 + Q9) - (Q10 + Q11) + Q12 + Q13 + Q14\big)/7 \end{aligned} \qquad (3)$$

Using the results of each individual aspect, we obtain the total embodiment score as follows:

$$\text{Score} = \big(\text{Motor Control} + \text{Finger Movement Realism} + 2 \cdot \text{Interaction Realism}\big)/4 \qquad (4)$$
Interaction realism is the key aspect of this qualitative evaluation. Therefore, in Equation 4 we emphasize this aspect by weighting it higher.
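A minimal sketch of this scoring, assuming the fourteen responses are stored in a dictionary keyed 'Q1'..'Q14' with values in [-3, 3]:

```python
def embodiment_score(r: dict) -> dict:
    """Aspect scores of Equation 3 and the weighted total of Equation 4."""
    motor_control = ((r["Q1"] + r["Q2"]) - (r["Q3"] + r["Q4"])) / 4
    finger_realism = (r["Q5"] + r["Q6"] + r["Q7"]) / 3
    interaction = ((r["Q8"] + r["Q9"]) - (r["Q10"] + r["Q11"])
                   + r["Q12"] + r["Q13"] + r["Q14"]) / 7
    total = (motor_control + finger_realism + 2 * interaction) / 4
    return {"motor_control": motor_control,
            "finger_movement_realism": finger_realism,
            "interaction_realism": interaction,
            "score": total}

# Example: a neutral participant who strongly agrees with Q12-Q14.
answers = {f"Q{i}": 0 for i in range(1, 12)}
answers.update({"Q12": 3, "Q13": 3, "Q14": 3})
print(embodiment_score(answers))
```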
Quantitative evaluation
With the quantitative evaluation, we aim to evaluate grasping quality in terms of how visually plausible or realistic it is. In other words, our purpose is to visually quantify our grasping performance, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we proceed with the calculation of the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8).
In Figure 4, the red capsules represent 3D sphere-tracing volumes, which provide information about the nearest collision from the trace starting point to the first contact point on the object surface that intersects the sphere volume. For each finger phalanx with an attached capsule trigger (represented in green), we cast a sphere trace, obtaining the nearest contact points on the object surface, represented as lime-colored dots (impact point, Ip). In this representation, the total error for the index finger would be the average of the sum of the distances in millimeters between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest-distance computation is approximated by an equation developed to find the distance between the impact point and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. Capsule trigger centers are located on the surface of the hand mesh, so this computation should approximate the nearest distance to the mesh well enough, without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, the impact point, and the capsule trigger center point):
$$\vec{D}_{Ip} = Ip - Sp \qquad \vec{D}_{CTc} = CTc - Sp \qquad (5)$$

where $\vec{D}_{Ip}$ is the vector from the starting point to the impact point, and $\vec{D}_{CTc}$ represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple, respectively, in Figure 4. Then, we find the cosine of the angle they form through their dot product:

$$\vec{D}_{Ip} \cdot \vec{D}_{CTc} = |\vec{D}_{Ip}| \, |\vec{D}_{CTc}| \cos(\beta), \qquad \cos(\beta) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}| \, |\vec{D}_{CTc}|} \qquad (6)$$

We can now substitute that cosine when computing the projection of $\vec{D}_{Ip}$ over the longitudinal axis of the red capsule ($\vec{D}_{Pr}$ in Figure 4):

$$|\vec{D}_{Pr}| = \cos(\beta) \, |\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}| \, |\vec{D}_{CTc}|} \, |\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} \qquad (7)$$

Having that magnitude, we only have to subtract $|\vec{D}_{CTc}|$ in order to obtain the desired distance:

$$ND(Ip, Sp, CTc) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} - |\vec{D}_{CTc}| = \frac{(Ip - Sp) \cdot (CTc - Sp)}{|CTc - Sp|} - |CTc - Sp| \qquad (8)$$
Computing the distance like this, with the final subtraction, allows us to obtain a positive distance when the impact point is outside the hand mesh, and a negative one when it is inside. We compute the nearest distance for each capsule trigger attached to a finger phalanx. As stated before, a negative distance indicates a finger penetration issue on the object surface. Otherwise, if the distance is positive, it means that the finger stopped above the object surface. The ideal case is a zero distance, that is, the finger is perfectly situated on the object surface.
The total error for the hand is represented by the following equation:
$$\text{HandError} = \frac{\sum_{i=1}^{N_{Fingers}} \sum_{j=1}^{N_{CTF}} |ND(Ip_{ij}, Sp_{ij}, CTc_{ij})|}{N_{CapsuleTriggersPerFinger}} \qquad (9)$$
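Equations 8 and 9 translate directly into a few lines of NumPy. This is an illustrative sketch; the list of (Ip, Sp, CTc) samples and the normalization constant (two capsule triggers per finger, as described in Section 3) are assumptions of the example.

```python
import numpy as np

def nearest_distance(ip, sp, ctc) -> float:
    """Equation 8: signed distance from the impact point to the plane through the capsule
    trigger center, measured along the capsule's longitudinal axis. Positive means the
    finger stopped above the surface; negative means it penetrated the object."""
    ip, sp, ctc = map(np.asarray, (ip, sp, ctc))
    d_ip = ip - sp           # trace starting point -> impact point
    d_ctc = ctc - sp         # trace starting point -> capsule trigger center (axis direction)
    axis_len = np.linalg.norm(d_ctc)
    return float(np.dot(d_ip, d_ctc) / axis_len - axis_len)

def hand_error(samples, triggers_per_finger: int = 2) -> float:
    """Equation 9: sum of absolute phalanx distances over all fingers and triggers,
    normalized by the number of capsule triggers per finger."""
    total = sum(abs(nearest_distance(ip, sp, ctc)) for ip, sp, ctc in samples)
    return total / triggers_per_finger

# Single-trigger example: impact point slightly beyond the trigger plane (positive distance).
print(nearest_distance(ip=(0.0, 0.0, 2.2), sp=(0.0, 0.0, 0.0), ctc=(0.0, 0.0, 2.0)))  # ~0.2
```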
Dataset
To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as food items (e.g. cracker box, cans, box of sugar, fruits, etc.), tool items (e.g. power drill, hammer, screwdrivers, etc.), kitchen items (e.g. eating utensils) and also spherical objects (e.g. tennis ball, racquetball, golf ball, etc.). The Yale-CMU-Berkeley (YCB) Object and Model set [1] provides these real-life 3D textured models, scanned with outstanding accuracy and detail. The available objects have a wide variety of shapes, textures and sizes, as shown in Figure 5. The advantage of using real-life objects is that users already have previous experience manipulating similar objects, so they will try to grab and interact with them in the same way.
Participants
For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications; the rest are inexperienced virtual reality users. Participants take part in both the qualitative and the quantitative evaluation. The performance analysis procedure is described in the following subsection, indicating the concrete tasks to be performed by each participant.
Procedure
The system performance analysis begins with the quantitative evaluation. In this first phase, the user is embodied in a controlled scenario (video: https://youtu.be/4sPhLbHpywM) where 30 different objects are spawned in a delimited area, with random orientation, and in the same order as represented in Figure 5. The user tries to grasp each object as they would in real life and as quickly as possible. For each grasp, the system computes the error metric and also stores the time spent by the user in grasping the object. The purpose of this first phase is to visually analyze grasping quality, which is directly related to user expertise in VR environments and specifically with our grasping system. An experienced user knows the system limits, both when interacting with complex geometries and with large objects that make it difficult to perform the grasp action quickly and naturally. For the qualitative evaluation, the same user is embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (video: https://youtu.be/65gdFdwsTVg), the user answers the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects of the user experience in VR environments.
RESULTS AND DISCUSSION
In this section we discuss and analyze the results obtained from the performance analysis. On the one hand, we draw conclusions from the average error obtained when grasping each object by each participant group, and also from the overall error per object taking all participants into account (see Figure 7). On the other hand, we obtained the average elapsed time needed to grasp each object for each participant group, and also the average elapsed time per object taking all participants into account (see Figure 6). This allows us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed time, and to compare the system performance of inexperienced users with that of experienced ones.
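A small sketch of how the per-grasp logs can be aggregated into the per-object and per-group averages plotted in Figures 6 and 7 (the record keys are our own naming, not taken from the implementation):

```python
from collections import defaultdict
from statistics import mean

def aggregate(trials):
    """trials: iterable of dicts with keys 'object', 'group', 'time_s' and 'error_mm'."""
    by_object, by_group = defaultdict(list), defaultdict(list)
    for t in trials:
        by_object[t["object"]].append(t)
        by_group[(t["group"], t["object"])].append(t)
    per_object = {o: (mean(x["time_s"] for x in v), mean(x["error_mm"] for x in v))
                  for o, v in by_object.items()}
    per_group_object = {k: (mean(x["time_s"] for x in v), mean(x["error_mm"] for x in v))
                        for k, v in by_group.items()}
    return per_object, per_group_object

trials = [{"object": "tuna_can", "group": "experienced", "time_s": 1.8, "error_mm": 2.1},
          {"object": "tuna_can", "group": "inexperienced", "time_s": 2.3, "error_mm": 2.4}]
print(aggregate(trials))
```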
Qualitative evaluation
The qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we report, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results represented in Table 2, the evaluation by experienced users has been more disadvantageous, as they have a more elaborate criterion given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups. This is because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism were the lowest in both cases. This is mostly because users cannot control their individual finger movements, since the general hand gesture is controlled by a single trigger button of the controller. However, the overall embodiment score obtained is 2.08 out of 3.0.
Quantitative evaluation
As expected, inexperienced users took longer to grasp almost all of the object set due to their lack of practice and expertise with the system. This is clearly represented in Figure 6, where experienced users only took longer when grasping some tools such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects. In practice, and regarding interaction, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming for users to grasp. This is mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this fact can also affect grasping times. Even so, we can conclude that the largest objects are those that users take the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed in the case of the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, as is the case for the spatula and the toy airplane.
Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill, and bleach cleaner, are those reporting the highest error. In addition, we observe how the overall error decreases from the first objects to the last ones. This is mainly because the users' skill and expertise with the grasping system improve progressively. Moreover, the results indicate that users learn the system quickly.
APPLICATIONS
Our grasping system can be applied to several existing problems in different areas of interest, such as robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have explored robust grasp approaches that allow robots to interact with the environment. These contributions are organized into mainly four different blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32], and automatic learning approaches [33]. Our approach is most closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer, where robots try to imitate human grasping behaviour.
Our grasping system is also useful for the rehabilitation of patients with hand motor difficulties, which could even be done remotely, assisted by an expert [34] or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping patients improve the mobility of their hands in daily tasks [36]. Our novel error metric, in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This would make rehabilitation a more attractive process, by quantifying the patient's progress and visualizing their improvements over the duration of rehabilitation.
Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc., of the virtual human hands interacting with real objects from the YCB dataset (Figure 8).
LIMITATIONS AND FUTURE WORKS
β’ Hand movement is based on a single animation regardless of the object geometry. Depending on the object shape, we could vary the grasping gesture: spherical grasp, cylindrical grasp, finger pinch, key pinch, etc. However, our grasping gesture was experimentally the best when dealing with differently shaped objects.
β’ An object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
β’ Sometimes it is difficult to deal with large objects due to the initial hand posture or because objects slide out from the hand palm due to physical collisions. Experienced users can better deal with this problem.
As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This means finding a correspondence between the object geometry and a simple shape, e.g. a tennis ball is similar to a sphere, so a spherical grasp movement would be used. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots.
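As an illustration of this idea (purely hypothetical and not implemented in the current system; the thresholds are arbitrary), an object's bounding-box extents could be matched to a simple primitive in order to pick a grasp gesture:

```python
def grasp_gesture_from_extents(x: float, y: float, z: float, tol: float = 0.25) -> str:
    """Map bounding-box extents (in meters) to a primitive shape and a grasp gesture.
    Thresholds are illustrative only."""
    small, mid, large = sorted((x, y, z))
    if large > 0 and (large - small) / large < tol:
        return "spherical-grasp"      # roughly equal extents -> sphere-like (e.g. tennis ball)
    if mid > 0 and (mid - small) / mid < tol and large > 1.5 * mid:
        return "cylindrical-grasp"    # one long axis with a round section -> can-like objects
    if small < 0.02:
        return "finger-pinch"         # thin, flat objects
    return "key-pinch"                # fallback for small irregular objects

print(grasp_gesture_from_extents(0.065, 0.065, 0.065))  # -> spherical-grasp
```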
CONCLUSION
This work proposes a flexible and realistic-looking grasping system which enables smooth, real-time interaction with arbitrarily shaped objects in virtual reality environments. The approach is unconstrained by the object geometry, it is fully controlled by the user, and it is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: with previous experience in virtual reality environments (experienced users) and without expertise in VR (inexperienced users). For the quantitative evaluation, a new error metric has been proposed to evaluate each grasp, quantifying hand-object overlap. From the performance analysis results, we conclude that the overall user experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced and inexperienced users is subtle. Moreover, average errors become progressively smaller as more objects are grasped, which indicates that users learn the system quickly. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. The Unreal Engine 4 project source code is available on GitHub alongside several video demonstrations. This approach can easily be implemented on different game engines. | 5,795
1903.05238 | 2963943458 | Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. Resulting grasps are visually realistic because hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands) and it is also easily customizable. Moreover, it enables interaction with different objects regardless their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On one hand, qualitative evaluation was used in the assessment of abstract aspects, such as motor control, finger movement realism, and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grips. Performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience. | Grasping data-driven approaches have existed since a long time ago @cite_21 . These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped) which provide us the ability to discriminate between different grasp types. | {
"abstract": [
"Abstract This paper addresses the important issue of automating grasping movement in the animation of virtual actors, and presents a methodology and algorithm to generate realistic looking grasping motion of arbitrary shaped objects. A hybrid approach using both forward and inverse kinematics is proposed. A database of predefined body postures and hand trajectories are generalized to adapt to a specific grasp. The reachable space is divided into small subvolumes, which enables the construction of the database. The paper also addresses some common problems of articulated figure animation. A new approach for body positioning with kinematic constraints on both hands is described. An efficient and accurate manipulation of joint constraints is also presented. Finally, we describe an interpolation algorithm which interpolates between two postures of an articulated figure by moving the end effector along a specific trajectory and maintaining all the joint angles in the feasible range. Results are quite satisfactory, and some are shown in the paper."
],
"cite_N": [
"@cite_21"
],
"mid": [
"1999329153"
]
} | A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments | With the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field-of-view, frame rate, and interaction, among other technical specifications. In order to enhance the user's VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact with and manipulate scene objects as in the real world.
Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators), and also for teleoperation (e.g. robotics), where the user's ability to interact with and explore the simulated environments is paramount for achieving an immersive experience. For this purpose, most VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However,
grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and geometries, and also the selection of the most suitable grasp for every different object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, relying completely on the animations' realism. Moreover, these approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. For every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting their interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world.
In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered with Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries without the need for a predefined grasp animation for each one. With our approach, fingers are automatically fitted to the object shape and geometry. We constrain the motion of the finger phalanges by checking in real time for collisions with the object geometry.
Our grasping system was analyzed both qualitatively and quantitatively. On one side, for the qualitative analysis, grasping system was implemented in a photorealistic envi-arXiv:1903.05238v1 [cs.GR] 12 Mar 2019 ronment where the user is freely able to interact with real world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that will address the user interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience. On the other side, a quantitative grasping system analysis was carried out, contrasting the elapsed time a user needs in grasping an object and grasp quality based on a novel error metric which quantifies the overlapping between hand fingers and grasped object.
From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time user needed to grasp the object and also the contact points. This information alongside other provided by UnrealROX [2] such as depth mpas, instance segmentations, normal maps, 3D bounding boxes and 6D object pose (see Figure 8), enables different robotic applications as described in Section 6.
In summary, we make the three following contributions:
β’ We propose a real-time, realistic looking and flexible grasping system for natural interaction with arbitrary shaped objects in virtual reality environments;
β’ We propose a novel metric and procedure to analyze visual grasp quality in VR interactions quantifying hand-object overlapping;
β’ We provide the contact points extracted during the interaction in both local and global system coordinates.
The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is comprised in Section 3 where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, limitations of our approach are covered in Section 7 alongside several feature works. Finally, some conclusions are drawn in the last Section 8.
Data-driven approaches
Grasping data-driven approaches have existed since a long time ago [3]. These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped) which provide us the ability to discriminate between different grasp types.
From this database, grasp poses are selected according with given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses and also object shapes and sizes. Despite having a good database, the process of hand poses selection is not straightforward since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses.
The selection process is also constrained by the hand high degree of freedom (DOF). In order to deal with dimensionality and redundancy many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs aiming to simplify hand models reducing DOF number. The results suggest to simplify hand models by reducing DOFs from 50 to 15 for both hands in conjunction without loosing relevant features.
Hybrid data-driven approaches
In order to achieve realistic object interactions, physical simulations on the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both, kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in the handshake gesture. Bai et al. [13] simulate grasping an object, drop it on a specific spot on the palm and let it roll on the hand palm. A limitation of this approach is that information about the object must be known in advance, which disable robot to interact with unknown objects. By using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically-based hand manipulation poses varying the contact points with the object, grasping forces and also joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct a realistic hand motion and grasping generating feasible contact point trajectories. Selection of valid motions is defined as a randomized depthfirst tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities.
Virtual reality approaches
This section is limited to virtual reality interaction using VR motion controllers, avoiding glove-based and bare-hand approaches. Implementation of the aforementioned techniques in virtual reality environments is a difficult task cause optimizations are needed to keep processes running in real time. Most of current existing approaches for flexible and realistic grasping are not suitable for real-time interaction. VR developers aim to create fast solutions with realistic and natural interactions.
Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects for solving puzzles. Hand animations are mostly procedurally generated, enabling grasping of complex geometries regardless their grasp angle. This approach [16] is based on a graph traversal heuristic which searches intersections between hand fingers and object surface mesh triangles. A* heuristic find the intersection that is nearest to the palm and also avoid invalid intersections. After calculating angles to make contact with each intersection point, highest angle is selected and fingers are rotated accordingly.
Mostly implemented solutions in VR are animationbased [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects. Movements are predefined for concrete object geometries, hindering user interaction capabilities in the virtual environment. In [17], distance grab selection technique is implemented to enhance the user comfort when interacting in small play areas, while sitting or for grabbing objects on the floor. Grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasp, and a cone for long-range grabbing. Based on this approach, we used trigger volumes attached to finger phalanges to control its movement and detect object collisions more precisely. In this way we achieve a more flexible and visually plausible grasping system enhancing immersion and realism during interactions.
GRASPING SYSTEM
With the latest advances in rendering techniques, visualization of virtual reality (VR) environments is increasingly more photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part to enhance the user experience and immersion. VR scene content is portrayed in a physically tangible way, inviting users to explore the environment, and interact or manipulate represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, described as a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences.
Most of the currently available VR solutions and games lack of a robust and natural object manipulation and interaction capabilities. This is because, bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to be in real-time and maintaining a high and solid frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need of a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real-time.
Overview
Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows:
β’ Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletals and models.
β’ Flexible: most of the currently available VR grasp solutions are purely animation-driven, thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible as it allows interaction with unknown objects. In this way, the user can freely decide the object to interact with, without any restrictions.
β’ Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects, regardless of their shape.
β’ Visually realistic: grasping action is fully controlled by the user, taking advantage of its previous experience and knowledge in grasping daily common realistic objects such as cans, cereal boxes, fruits, tools, etc. This makes resulting grasping visually realistic and natural just as a human would in real life.
The combination of the above described features makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive.
Our grasping works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. collision with another object. These components can be of different shapes, such as capsule, box, sphere, etc. In the Figure 2 capsule triggers are represented in green and palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective in detecting objects collisions. Notice that collision detection is performed for each frame, so, for heavy configurations with many triggers, performance would be harmed.
Components
Our grasping system is composed of the components represented in the Figure 3. These components are defined as follows:
β’ Object selection: selects the nearest object to the hand palm. Detection area is determined by the sphere Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. Smallest distance will determine the nearest object, saving its reference for the other components.
β’ Interaction manager: manages capsule triggers which are attached to finger phalanges as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until hand is reopened or the overlapping with the manipulated object is over. The phalanx state (blocked or in movement) will be used as input to the grasping logic component. A phalanx is blocked if there is an overlap of the its corresponding capsule trigger with the manipulated object.
β’ Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation avoiding unexpected and unrealistic behavior in finger movement caused neither by a performance drop or other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. In the case of detecting an unexpected variation (i.e. big variation) during a frame change, missing intermediate values will be interpolated so as to keep finger movement smooth.
β’ Grasping logic: this component manages when to grab or release an object. This decision is made based on the currently blocked phalanges determined with the interaction manager component. The object is grasped or released based on the following function:
f (x) = true, if (th ph β¨ palm) β§ (in ph β¨ mi ph ) f alse, otherwise(1)
, where x = (th ph , in ph , mi ph , palm) is defined as
th ph = thumb mid β¨ thumb dist in ph = index mid β¨ index dist mi ph = middle mid β¨ middle dist(2)
Equation 1 determines when an object is grasped or released based on the inputs determined in Equation 2 where th ph , in ph , and mi ph , are the thumb, index and middle phalanges respectively. According to human hand morphology, mid and dist subscripts refer to the middle and distal phalanx (e.g. thumb dist references the distal phalanx of thumb finger and at the implementation level it is a boolean value).
Implementation details
Grasping system has been originally implemented in Unreal Engine 4 (UE4), however, it can be easily implemented in other engines such as Unity, which would also provide us with the necessary components for replicating the system (e.g. overlapping triggers). The implementation consists of UE4 blueprints and has been correctly structured in the components depicted in Figure 3 and described in the previous section. Implementation is available at Github 1 .
PERFORMANCE ANALYSIS
In order to validate our proposal, a complete performance analysis has been carried out. This analysis covers from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. Evaluation methods are briefly described as follows:
β’ Qualitative evaluation: based on the user experience interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 from the Subsection 4.1, which addresses qualitative evaluation in detail.
β’ Quantitative evaluation: based on the grasping quality in terms of realism (i.e. how much it is visually plausible). We consider a visually plausible grasp when hand palm or fingers are level with the object surface, as in a real life grasping. However, when dealing with complex meshes, the collision detection precision can be significantly influenced. In this case, fingers could penetrate the object surface, or remain above its surface when a collision was detected earlier than expected. This would result in an unnatural and unrealistic grasp. To visually quantify grasping quality, we purpose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. Quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2.
Qualitative evaluation
Most VR experiments include qualitative and quantitative studies to measure its realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that allows for 1. https://github.com/3dperceptionlab/unrealgrasp fair and easy comparisons. The different nature of the VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects:
β’ Motor Control: this aspect considers the movement of the virtual hands as a whole and its responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones.
β’ Finger Movement: this aspect takes the specific finger movement into account. Such movements must be natural and plausibly. Moreover, they must react properly to the user's intent.
β’ Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects.
The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects. Following [23], the users of the study will be pre- It seemed as if the virtual fingers were mine when grabbing an object Q10 I felt that grabbing objects was clumsy and hard to achieve Q11 It seemed as if finger movement were guided and unnatural Q12 I felt that grasps were visually correct and natural Q13 I felt that grasps were physically correct and natural Q14 It seemed that fingers were adapting properly to the different geometries sented with such questions right after the end of the experience in a randomized order to limit context effects. In addition, questions must be answered following the 7-point Likert-scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree. Results will be presented as a single embodiment score using the following equations:
Motor Control = ((Q1 + Q2) β (Q3 + Q4))/4 Finger Movement Realism = (Q5 + Q6 + Q7)/3 Interaction Realism = ((Q8 + Q9) β (Q10 + Q11) + Q12 + Q13 + Q14)/7(3)
, using the results of each individual aspect, we obtain the total embodiment score as follows: Score = (Motor Control + Finger Movement Realism + Interaction Realism * 2)/4
The interaction realism is the key aspect of this qualitative evaluation. So that, in the Equation 4 we emphasize this aspect by weighting it higher.
Quantitative evaluation
With the quantitative evaluation, we aim to evaluate grasping quality in terms of how much it is visually plausible or realistic. In other words, our purpose is to visually quantify our grasping performance, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we proceed with the calculation of the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8).
In Figure 4 the red capsules are representing 3D sphere tracing volumes which provide information of the nearest collision from the trace starting point to the first contact point on the object surface which intersects the sphere volume. For each finger phalanx with an attached capsule trigger represented in green, we throw a sphere trace obtaining the nearest contact points on the object surface represented as lime colored dots (impact point, Ip). In this representation, the total error for the index finger would be the average of the sum of the distances in millimeters between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest distance computation is approximated by an equation that was developed to find the distance between the impact point, and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. Capsule triggers centers are located on the surface of the hand mesh, so this computation should approximate the nearest distance to the mesh well enough, without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, the impact point and the capsule trigger center point):
β β β D Ip = Ip β Sp β ββ β D CT c = CT c β Sp(5)
where β β β D Ip is the vector from the starting point to the impact point, and β ββ β D CT c vector represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple respectively in Figure 4. Then, we find the cosine of the angle they form through their dot product:
β β β D Ip Β· β ββ β D CT c = | β β β D Ip | * | β ββ β D CT c | * cos(Ξ²) cos(Ξ²) = β β β D Ip Β· β ββ β D CT c | β β β D Ip | * | β ββ β D CT c |(6)
We can now substitute that cosine when computing the projection of β β β D Ip over the longitudinal axis of the red capsule ( β β β D P r in Figure 4):
| β β β D P r | = cos(Ξ²) * | β β β D Ip | | β β β D P r | = β β β D Ip Β· β ββ β D CT c | β ββ β D CT c | * | β β β D Ip | * | β β β D Ip | | β β β D P r | = β β β D Ip Β· β ββ β D CT c | β ββ β D CT c |(7)
Having that module, we only have to subtract | β ββ β D CT c | in order to obtain the desired distance:
N D(Ip, Sp, CT c) = β β β D Ip Β· β ββ β D CT c | β ββ β D CT c | β | β ββ β D CT c | N D(Ip, Sp, CT c) = β ββββ β Ip β Sp Β· β ββββββ β CT c β Sp | β ββββββ β CT c β Sp| β | β ββββββ β CT c β Sp|(8)
Computing the distance this way, with the final subtraction, yields a positive distance when the impact point is outside the hand mesh and a negative one when it is inside. We compute the nearest distance for each capsule trigger attached to a finger phalanx. As stated before, a negative distance indicates that the finger penetrates the object surface, whereas a positive distance means that the finger stopped above it. The ideal case is a distance of zero, i.e. the finger rests exactly on the object surface.
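A compact, framework-free implementation of Equation 8 could look like the following sketch; the `Vec3` type and the function name are assumptions made for illustration, not the engine types used in the original UE4 project.

```cpp
#include <cmath>

// Minimal 3D vector type so the sketch does not depend on a specific engine.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    double Length() const { return std::sqrt(Dot(*this)); }
};

// Signed nearest distance of Equation 8: positive when the impact point lies
// outside the hand mesh, negative when the finger penetrates the object.
// ip  = impact point returned by the sphere trace,
// sp  = starting point of the trace,
// ctc = centre of the capsule trigger on the phalanx surface (assumed != sp).
double NearestDistance(const Vec3& ip, const Vec3& sp, const Vec3& ctc) {
    const Vec3 dIp  = ip - sp;   // start point -> impact point
    const Vec3 dCtc = ctc - sp;  // longitudinal axis of the trace capsule
    const double axisLen = dCtc.Length();
    // Projection of dIp onto the capsule axis (Equation 7), minus the axis length.
    return dIp.Dot(dCtc) / axisLen - axisLen;
}
```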
The total error for the hand is represented by the following equation:
$$\text{HandError} = \sum_{i=1}^{N_{fingers}} \frac{\sum_{j=1}^{N_{CTF}} \left| ND(Ip_{ij}, Sp_{ij}, CTc_{ij}) \right|}{N_{CapsuleTriggersPerFinger}} \qquad (9)$$
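Reusing the `Vec3` and `NearestDistance` helpers from the sketch above, Equation 9 can be accumulated per hand as follows; the container layout (one list of samples per finger) is an assumption made purely for illustration.

```cpp
#include <vector>
#include <cmath>

// One sphere-trace sample per capsule trigger: the three points used by
// NearestDistance(). Field names are illustrative.
struct PhalanxSample {
    Vec3 impactPoint;
    Vec3 startPoint;
    Vec3 capsuleCenter;
};

// Hand error of Equation 9: absolute per-phalanx distances, averaged over the
// capsule triggers of each finger and accumulated over all fingers.
double HandError(const std::vector<std::vector<PhalanxSample>>& fingers) {
    double error = 0.0;
    for (const auto& finger : fingers) {            // N_fingers
        if (finger.empty()) continue;
        double fingerError = 0.0;
        for (const auto& s : finger)                // capsule triggers of this finger
            fingerError += std::abs(NearestDistance(s.impactPoint, s.startPoint, s.capsuleCenter));
        error += fingerError / static_cast<double>(finger.size());
    }
    return error;
}
```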
Dataset
To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as food items (e.g. cracker box, cans, box of sugar, fruits), tool items (e.g. power drill, hammer, screwdrivers), kitchen items (e.g. eating utensils) and also spherically shaped objects (e.g. tennis ball, racquetball, golf ball). The Yale-CMU-Berkeley (YCB) Object and Model set [1] provides these real-life 3D textured models, scanned with outstanding accuracy and detail. The available objects have a wide variety of shapes, textures and sizes, as can be seen in Figure 5. The advantage of using real-life objects is that users already have previous experience manipulating similar objects, so they will try to grab and interact with them in the same way.
Participants
For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications; the rest are inexperienced virtual reality users. Participants will take part in both the qualitative and the quantitative evaluation. The performance analysis procedure will be described in the following subsection, indicating the concrete tasks to be performed by each participant.
Procedure
The system performance analysis begins with the quantitative evaluation. In this first phase, the user will be embodied in a controlled scenario (video: https://youtu.be/4sPhLbHpywM) where 30 different objects will be spawned in a delimited area, with random orientation, and in the same order as represented in Figure 5. The user will try to grasp each object as they would in real life and as quickly as possible. For each grasp, the system will compute the error metric and will also store the time the user spent grasping the object. The purpose of this first phase is to visually analyze grasping quality, which is directly related to user expertise in VR environments and, concretely, with our grasping system. An experienced user knows the system limits, both when interacting with complex geometries and with large objects that make it difficult to perform the grasp action quickly and naturally. For the qualitative evaluation, the same user will be embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (video: https://youtu.be/65gdFdwsTVg), the user will have to answer the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects regarding user experience in VR environments.
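For reference, the information logged in this first phase could be organized per trial as in the hypothetical record below; the structure and field names are illustrative and are not taken from the actual implementation.

```cpp
#include <string>
#include <vector>

// Hypothetical per-trial record for the quantitative phase: one entry per
// spawned YCB object, storing the elapsed grasp time and the hand error.
struct GraspTrial {
    std::string objectName;    // e.g. "cracker_box"
    double      secondsToGrasp;
    double      handErrorMm;   // HandError() result, in millimetres
};

using SessionLog = std::vector<GraspTrial>;  // 30 trials per participant
```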
RESULTS AND DISCUSSION
In this section we will discuss and analyze the results obtained from the performance analysis process. On the one hand, we will draw conclusions from the average error obtained in grasping each object by each participant group, and also from the overall error per object taking into account all the participants (see Figure 7). On the other hand, we obtained the average elapsed time needed in grasping each object for each participant group, and also the average elapsed time needed for each object taking into account all the participants (see Figure 6). This will allow us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed time for grasping. Moreover, we can compare system performance used by inexperienced users in comparison with experienced ones.
Qualitative evaluation
Qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we report, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results reported in Table 2, the scoring given by experienced users has been less favourable, as they have a more elaborate criterion given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups, because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism are the lowest in both cases. This is mostly because users cannot control individual finger movements, since the general hand gesture is controlled by a single trigger button of the controller. Even so, the overall embodiment score obtained is 2.08 out of 3.0.
Quantitative evaluation
As expected, inexperienced users have taken longer to grasp almost the entire object set, due to their lack of practice and expertise with the system. This is clearly shown in Figure 6, where experienced users only took longer when grasping some tools such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects; in practice, and as far as interaction is concerned, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming objects to grasp, mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this can also affect grasping times. Even so, we can conclude that the largest objects are the ones that users take the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed for the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, as is the case for the spatula and the toy airplane.
Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill, and bleach cleaner, are those reporting the highest error. In addition, we observe how the overall error decreases from the first objects to the last ones, mainly because user skill and expertise with the grasping system improve progressively. In other words, the results point to a steep learning curve.
APPLICATIONS
Our grasping system can be applied to several existing problems in different areas of interest, such as: robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have been explored to implement robust grasp approaches that allow robots to interact with the environment. These contributions are organized in mainly four different blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32] and automatic learning approaches [33]. Our approach is more closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer where robots try to imitate human behaviour in performing grasping.
Our grasping system is also useful for rehabilitation of patients with hand motor difficulties, and it could even be done remotely assisted by an expert [34], or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping them to improve the mobility of their hands in daily tasks [36]. Our novel error metric in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This will make rehabilitation a more attractive process, by quantifying the patient progress and visualizing its improvements over the duration of rehabilitation.
Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc. of the virtual human hands interacting with real objects from the YCB dataset (Figure 8).
LIMITATIONS AND FUTURE WORKS
• Hand movement is based on a single animation, regardless of object geometry. Depending on the object shape, the grasping gesture could vary: spherical grasping, cylindrical grasping, finger pinch, key pinch, etc. However, our single grasping gesture was experimentally the best when dealing with differently shaped objects.
• An object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
• Sometimes it is difficult to deal with large objects, due to the initial hand posture or because objects slide out of the hand palm due to physical collisions. Experienced users can deal better with this problem.
As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This means finding a correspondence between the object geometry and a simple shape, e.g. a tennis ball is similar to a sphere, so a spherical grasp movement would be used. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots.
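Purely as an illustration of this future-work idea (none of this exists in the current system), the mapping from a coarse shape class to a grasp gesture could start as simply as the following; the shape classes, grasp types and their pairing are invented here for illustration only.

```cpp
// Hypothetical mapping from a coarse shape class to a grasp gesture.
enum class ShapeClass { Sphere, Cylinder, Box, Flat };
enum class GraspType  { Spherical, Cylindrical, Palmar, Pinch };

GraspType SelectGrasp(ShapeClass shape) {
    switch (shape) {
        case ShapeClass::Sphere:   return GraspType::Spherical;   // e.g. tennis ball
        case ShapeClass::Cylinder: return GraspType::Cylindrical; // e.g. can, bottle
        case ShapeClass::Box:      return GraspType::Palmar;      // e.g. cracker box
        case ShapeClass::Flat:     return GraspType::Pinch;       // e.g. cutlery
    }
    return GraspType::Palmar;
}
```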
CONCLUSION
This work proposes a flexible and realistic looking grasping system which enables smooth and real-time interaction in virtual reality environments with arbitrary shaped objects. This approach is unconstrained by the object geometry, it is fully controlled by the user and it is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: with previous experience in virtual reality environments (experienced users) and without expertise in VR (inexperienced). For the quantitative evaluation, a new error metric has been proposed to evaluate each grasp, quantifying hand-object overlapping. From the performance analysis results, we conclude that user overall experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced users and non experienced is subtle. Moreover, average errors are progressively smaller as more object are grasped. This clearly indicates a steep learning curve. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. Unreal Engine 4 project source code is available at GitHub alongside several video demonstrations. This approach can easily be implemented on different game engines. | 5,795 |
1903.05238 | 2963943458 | Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environments. Resulting grasps are visually realistic because hand is automatically fitted to the object shape from a position and orientation determined by the user using the VR handheld controllers (e.g. Oculus Touch motion controllers). Our approach is flexible because it can be adapted to different hand meshes (e.g. human or robotic hands) and it is also easily customizable. Moreover, it enables interaction with different objects regardless their geometries. In order to validate our proposal, an exhaustive qualitative and quantitative performance analysis has been carried out. On one hand, qualitative evaluation was used in the assessment of abstract aspects, such as motor control, finger movement realism, and interaction realism. On the other hand, for the quantitative evaluation a novel metric has been proposed to visually analyze the performed grips. Performance analysis results indicate that previous experience with our grasping system is not a prerequisite for an enjoyable, natural and intuitive VR interaction experience. | The selection process is also constrained by the hand high degree of freedom (DOF). In order to deal with dimensionality and redundancy many researchers have used techniques such as principal component analysis (PCA) @cite_1 @cite_28 . For the same purpose, @cite_22 studied the correlations between hand DOFs aiming to simplify hand models reducing DOF number. The results suggest to simplify hand models by reducing DOFs from 50 to 15 for both hands in conjunction without loosing relevant features. | {
"abstract": [
"In this paper, we build upon recent advances in neuroscience research which have shown that control of the human hand during grasping is dominated by movement in a configuration space of highly reduced dimensionality. We extend this concept to robotic hands and show how a similar dimensionality reduction can be defined for a number of different hand models. This framework can be used to derive planning algorithms that produce stable grasps even for highly complex hand designs. Furthermore, it offers a unified approach for controlling different hands, even if the kinematic structures of the models are significantly different. We illustrate these concepts by building a comprehensive grasp planner that can be used on a large variety of robotic hands under various constraints.",
"Abstract This article reports an experimental study that aimed to quantitatively analyze motion coordination patterns across digits 2β5 (index to little finger), and examine the kinematic synergies during manipulative and gestic acts. Twenty-eight subjects (14 males and 14 females) performed two types of tasks, both right-handed: (1) cylinder-grasping that involved concurrent voluntary flexion of digits 2β5, and (2) voluntary flexion of individual fingers from digit 2 to 5 (i.e., one at a time). A five-camera opto-electronic motion capture system measured trajectories of 21 miniature reflective markers strategically placed on the dorsal surface landmarks of the hand. Joint angular profiles for 12 involved flexionβextension degrees of freedom (DOF's) were derived from the measured coordinates of surface markers. Principal components analysis (PCA) was used to examine the temporal covariation between joint angles. A mathematical modeling procedure, based on hyperbolic tangent functions, characterized the sigmoidal shaped angular profiles with four kinematically meaningful parameters. The PCA results showed that for all the movement trials ( n =280), two principal components accounted for at least 98 of the variance. The angular profiles ( n =2464) were accurately characterized, with the mean (Β±SD) coefficient of determination ( R 2 ) and root-mean-square-error (RMSE) being 0.95 (Β±0.12) and 1.03Β° (Β±0.82Β°), respectively. The resulting parameters which quantified both the spatial and temporal aspects of angular profiles revealed stereotypical patterns including a predominant (87 of all trials) proximal-to-distal flexion sequence and characteristic interdependence β involuntary joint flexion induced by the voluntarily flexed joint. The principal components' weights and the kinematic parameters also exhibited qualitatively similar variation patterns. Motor control interpretations and new insights regarding the underlying synergistic mechanisms, particularly in relation to previous findings on force synergies, are discussed.",
""
],
"cite_N": [
"@cite_28",
"@cite_1",
"@cite_22"
],
"mid": [
"2138983671",
"2066864006",
""
]
} | A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Environments | With the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projects are using virtual environments for different purposes. Most VR applications are related to the entertainment industry (i.e. games and 3D cinema) or architectural visualizations, where virtual scene realism is a cornerstone. Currently existing VR systems are limited by their resolution, field-of-view, frame rate, and interaction, among other technical specifications. In order to enhance the user's VR experience, developers are also focused on implementing rich interactions with the virtual environment, allowing the user to explore, interact and manipulate scene objects as in the real world.
Interaction is a crucial feature for training/simulation applications (e.g. flight, driving and medical simulators), and also teleoperation (e.g. robotics), where the user ability to interact and explore the simulated environments is paramount for achieving an immersive experience. For this purpose, most of VR devices come with a pair of handheld controllers which are fully tracked in 3D space and specifically designed for interaction. One of the most basic interaction tasks is object grasping and manipulation. In order to achieve an enjoyable experience in VR, a realistic, flexible and real-time grasping system is needed. However,
grasp synthesis in manipulation tasks is not straightforward because of the unlimited number of different hand configurations, the variety of object types and geometries, and also the selection of the most suitable grasp for each object in terms of realism, kinematics and physics. Currently existing real-time approaches in VR are purely animation-driven, relying completely on the realism of the animations. Moreover, these approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects: for every different object type and geometry, predefined animations are needed. This fact hinders the user experience, limiting its interaction capabilities. For complete immersion, the user should be able to interact with and manipulate different virtual objects as in the real world.
In this paper, we propose a real-time grasping system for object interaction in virtual reality environments. We aim to achieve natural and visually plausible interactions in photorealistic environments rendered by Unreal Engine. Taking advantage of headset tracking and motion controllers, a human operator can be embodied in such environments as a virtual human or robot agent to freely navigate and interact with objects. Our grasping system is able to deal with different object geometries, without the need of a predefined grasp animation for each. With our approach, fingers are automatically fitted to object shape and geometry. We constrain hand finger phalanges motion checking in realtime for collisions with the object geometry.
Our grasping system was analyzed both qualitatively and quantitatively. On one side, for the qualitative analysis, the grasping system was implemented in a photorealistic environment where the user is freely able to interact with real-world objects extracted from the YCB dataset [1] (see Figure 1). The qualitative evaluation is based on a questionnaire that addresses the user interaction experience in terms of realism during object manipulation and interaction, system flexibility and usability, and general VR experience. On the other side, a quantitative analysis of the grasping system was carried out, contrasting the elapsed time a user needs to grasp an object and the grasp quality, based on a novel error metric which quantifies the overlapping between hand fingers and the grasped object.
From the quantitative evaluation, we obtain individual errors for the last two phalanges of each finger, the time the user needed to grasp the object and also the contact points. This information, alongside other data provided by UnrealROX [2] such as depth maps, instance segmentations, normal maps, 3D bounding boxes and 6D object poses (see Figure 8), enables different robotic applications, as described in Section 6.
In summary, we make the three following contributions:
β’ We propose a real-time, realistic looking and flexible grasping system for natural interaction with arbitrary shaped objects in virtual reality environments;
β’ We propose a novel metric and procedure to analyze visual grasp quality in VR interactions quantifying hand-object overlapping;
β’ We provide the contact points extracted during the interaction in both local and global system coordinates.
The rest of the paper is structured as follows. First of all, Section 2 analyzes the latest works related to object interaction and manipulation in virtual environments. The core of this work is contained in Section 3, where our approach is described in detail. Then, the performance analysis, with the qualitative and our novel quantitative evaluations, is discussed in Section 4. Analysis results are reported in Section 5. Then, several applications are discussed in Section 6. After that, the limitations of our approach are covered in Section 7, alongside several future works. Finally, some conclusions are drawn in Section 8.
Data-driven approaches
Data-driven grasping approaches have existed for a long time [3]. These methods are based on large databases of predefined hand poses, selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between different grasp types.
From such a database, grasp poses are selected according to the given object shape and geometry [6] [7]. Li et al. [6] construct a database with different hand poses as well as object shapes and sizes. Even with a good database, the process of hand pose selection is not straightforward, since there can be multiple equally valid possibilities for the same gesture. To address this problem, Li et al. [6] proposed a shape-matching algorithm which returns multiple potential grasp poses.
The selection process is also constrained by the high number of degrees of freedom (DOFs) of the hand. In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) [8] [9]. For the same purpose, Jorg et al. [10] studied the correlations between hand DOFs, aiming to simplify hand models by reducing the number of DOFs. Their results suggest simplifying hand models by reducing the DOFs from 50 to 15 for both hands in conjunction, without losing relevant features.
Hybrid data-driven approaches
In order to achieve realistic object interactions, physical simulation of the objects should also be considered [11] [12] [13]. Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid [14]. Pollard et al. [11] simulate hand interaction, such as two hands grasping each other in a handshake gesture. Bai et al. [13] simulate grasping an object, dropping it on a specific spot on the palm and letting it roll on the hand palm. A limitation of this approach is that information about the object must be known in advance, which prevents the robot from interacting with unknown objects. Using an initial grasp pose and a desired object trajectory, the algorithm proposed by Liu [15] can generate physically-based hand manipulation poses, varying the contact points with the object, the grasping forces and also the joint configurations. This approach works well for complex manipulations such as twist-opening a bottle. Ye and Liu [14] reconstruct realistic hand motion and grasping by generating feasible contact point trajectories. The selection of valid motions is defined as a randomized depth-first tree traversal, where nodes are recursively expanded if they are kinematically and dynamically feasible. Otherwise, backtracking is performed in order to explore other possibilities.
Virtual reality approaches
This section is limited to virtual reality interaction using VR motion controllers, leaving aside glove-based and bare-hand approaches. Implementing the aforementioned techniques in virtual reality environments is a difficult task because optimizations are needed to keep all processes running in real time. Most currently existing approaches for flexible and realistic grasping are not suitable for real-time interaction, whereas VR developers aim to create fast solutions with realistic and natural interactions.
Recent approaches are directly related to the entertainment industry, i.e. video games. An excellent example is Lone Echo, a narrative adventure game which consists of manipulating tools and objects to solve puzzles. Its hand animations are mostly procedurally generated, enabling the grasping of complex geometries regardless of their grasp angle. This approach [16] is based on a graph traversal heuristic which searches for intersections between the hand fingers and the triangles of the object surface mesh. An A* heuristic finds the intersection that is nearest to the palm and also avoids invalid intersections. After calculating the angles needed to make contact with each intersection point, the highest angle is selected and the fingers are rotated accordingly.
Most solutions implemented in VR are animation-based [17] [18] [19]. These approaches are constrained to a limited number of simple object geometries and are unable to deal with unknown objects: movements are predefined for concrete object geometries, hindering the user's interaction capabilities in the virtual environment. In [17], a distance-grab selection technique is implemented to enhance user comfort when interacting in small play areas, while sitting, or when grabbing objects on the floor. That grasping system is based on three trigger volumes attached to each hand: two small cylinders for short-range grasps, and a cone for long-range grabbing. Based on this approach, we use trigger volumes attached to the finger phalanges to control their movement and detect object collisions more precisely. In this way we achieve a more flexible and visually plausible grasping system, enhancing immersion and realism during interactions.
GRASPING SYSTEM
With the latest advances in rendering techniques, visualization of virtual reality (VR) environments is increasingly more photorealistic. Besides graphics, which are the cornerstone of most VR solutions, interaction is also an essential part to enhance the user experience and immersion. VR scene content is portrayed in a physically tangible way, inviting users to explore the environment, and interact or manipulate represented objects as in the real world. VR devices aim to provide very congruent means of primary interaction, described as a pair of handheld devices with very accurate 6D one-to-one tracking. The main purpose is to create rich interactions producing memorable and satisfying VR experiences.
Most of the currently available VR solutions and games lack of a robust and natural object manipulation and interaction capabilities. This is because, bringing natural and intuitive interactions to VR is not straightforward, which makes VR development challenging at this stage. Interactions need to be in real-time and maintaining a high and solid frame rate, directly mapping user movement to VR input in order to avoid VR sickness (visual and vestibular mismatch). Maintaining the desired 90 frames per second (FPS) in a photorealistic scene alongside complex interactions is not straightforward. This indicates the need of a flexible grasping system designed to naturally and intuitively manipulate unknown objects of different geometries in real-time.
Overview
Our grasping approach was designed for real-time interaction and manipulation in virtual reality environments by providing a simple, modular, flexible, robust, and visually realistic grasping system. Its main features are described as follows:
β’ Simple and modular: it can be easily integrated with other hand configurations. Its design is modular and adaptable to different hand skeletals and models.
β’ Flexible: most of the currently available VR grasp solutions are purely animation-driven, thus limited to known geometries and unable to deal with previously unseen objects. In contrast, our grasping system is flexible as it allows interaction with unknown objects. In this way, the user can freely decide the object to interact with, without any restrictions.
β’ Robust: unknown objects can have different geometries. However, our approach is able to adapt the virtual hand to objects, regardless of their shape.
β’ Visually realistic: grasping action is fully controlled by the user, taking advantage of its previous experience and knowledge in grasping daily common realistic objects such as cans, cereal boxes, fruits, tools, etc. This makes resulting grasping visually realistic and natural just as a human would in real life.
The combination of the above described features makes VR interaction a pleasant user experience, where object manipulation is smooth and intuitive.
Our grasping works by detecting collisions with objects through the use of trigger actors placed experimentally on the finger phalanges. A trigger actor is a component from Unreal Engine 4 used for casting an event in response to an interaction, e.g. collision with another object. These components can be of different shapes, such as capsule, box, sphere, etc. In the Figure 2 capsule triggers are represented in green and palm sphere trigger in red. We experimentally placed two capsule triggers on the last two phalanges of each finger. We noticed that this configuration is the most effective in detecting objects collisions. Notice that collision detection is performed for each frame, so, for heavy configurations with many triggers, performance would be harmed.
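As an illustration, the trigger layout just described (two capsule triggers on the two distal-most phalanges of each finger, plus the palm sphere trigger) can be written down as follows; the enums and the function are hypothetical names, not part of the original project.

```cpp
#include <vector>

enum class Finger  { Thumb, Index, Middle, Ring, Pinky };
enum class Phalanx { Middle, Distal };

struct CapsuleTriggerSpec {
    Finger  finger;
    Phalanx phalanx;
};

// Ten capsule triggers in total; the palm sphere trigger is handled separately.
std::vector<CapsuleTriggerSpec> DefaultTriggerLayout() {
    std::vector<CapsuleTriggerSpec> layout;
    for (Finger f : {Finger::Thumb, Finger::Index, Finger::Middle, Finger::Ring, Finger::Pinky}) {
        layout.push_back({f, Phalanx::Middle});
        layout.push_back({f, Phalanx::Distal});
    }
    return layout;
}
```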
Components
Our grasping system is composed of the components represented in the Figure 3. These components are defined as follows:
• Object selection: selects the nearest object to the hand palm. The detection area is determined by the sphere trigger (shown in red in Figure 2). The sphere trigger returns the world location of all the overlapped actors. As a result, the nearest actor can be determined by computing the distance from each overlapped actor to the center of the sphere trigger. The smallest distance determines the nearest object, whose reference is saved for the other components.
β’ Interaction manager: manages capsule triggers which are attached to finger phalanges as represented in Figure 2. If a capsule trigger reports an overlap event, the movement of its corresponding phalanx is blocked until hand is reopened or the overlapping with the manipulated object is over. The phalanx state (blocked or in movement) will be used as input to the grasping logic component. A phalanx is blocked if there is an overlap of the its corresponding capsule trigger with the manipulated object.
β’ Finger movement: this component determines the movement of the fingers during the hand closing and opening animations. It ensures a smooth animation avoiding unexpected and unrealistic behavior in finger movement caused neither by a performance drop or other interaction issues. Basically, it monitors each variation in the rotation value of the phalanx. In the case of detecting an unexpected variation (i.e. big variation) during a frame change, missing intermediate values will be interpolated so as to keep finger movement smooth.
β’ Grasping logic: this component manages when to grab or release an object. This decision is made based on the currently blocked phalanges determined with the interaction manager component. The object is grasped or released based on the following function:
$$f(x) = \begin{cases} \text{true}, & \text{if } (th_{ph} \lor palm) \land (in_{ph} \lor mi_{ph}) \\ \text{false}, & \text{otherwise} \end{cases} \qquad (1)$$

where $x = (th_{ph}, in_{ph}, mi_{ph}, palm)$ is defined as

$$th_{ph} = thumb_{mid} \lor thumb_{dist}, \qquad in_{ph} = index_{mid} \lor index_{dist}, \qquad mi_{ph} = middle_{mid} \lor middle_{dist} \qquad (2)$$
Equation 1 determines when an object is grasped or released based on the inputs defined in Equation 2, where $th_{ph}$, $in_{ph}$ and $mi_{ph}$ refer to the thumb, index and middle phalanges respectively, and $palm$ to the palm sphere trigger. Following human hand morphology, the $mid$ and $dist$ subscripts refer to the middle and distal phalanx (e.g. $thumb_{dist}$ references the distal phalanx of the thumb); at the implementation level, each of these is a boolean value indicating whether the corresponding trigger overlaps the manipulated object.
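A minimal sketch of this grasping logic, assuming one boolean overlap flag per tracked trigger (as maintained by the interaction manager), is shown below. The struct and function names are illustrative only.

```cpp
// Overlap flags reported by the interaction manager for the current frame.
// Each flag is true while the corresponding trigger overlaps the manipulated object.
struct HandTriggerState {
    bool thumbMid, thumbDist;
    bool indexMid, indexDist;
    bool middleMid, middleDist;
    bool palm;
};

// Equation 1: grasp while (thumb or palm) AND (index or middle) are in contact.
bool ShouldGrasp(const HandTriggerState& s) {
    const bool thumb  = s.thumbMid  || s.thumbDist;   // th_ph  (Equation 2)
    const bool index  = s.indexMid  || s.indexDist;   // in_ph
    const bool middle = s.middleMid || s.middleDist;  // mi_ph
    return (thumb || s.palm) && (index || middle);
}
```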
Implementation details
The grasping system was originally implemented in Unreal Engine 4 (UE4); however, it can easily be implemented in other engines such as Unity, which also provide the necessary components for replicating the system (e.g. overlapping triggers). The implementation consists of UE4 blueprints and is structured following the components depicted in Figure 3 and described in the previous section. The implementation is available at GitHub (https://github.com/3dperceptionlab/unrealgrasp).
PERFORMANCE ANALYSIS
In order to validate our proposal, a complete performance analysis has been carried out. This analysis covers from a qualitative evaluation, which is prevalent in the assessment of VR systems, to a novel quantitative evaluation. Evaluation methods are briefly described as follows:
β’ Qualitative evaluation: based on the user experience interacting with real objects from the YCB dataset in a photorealistic indoor scenario. Its purpose is to assess interaction realism, immersion, hand movement naturalness and other qualitative aspects described in Table 1 from the Subsection 4.1, which addresses qualitative evaluation in detail.
• Quantitative evaluation: based on grasping quality in terms of realism (i.e. how visually plausible the grasp is). We consider a grasp visually plausible when the palm or the fingers lie level with the object surface, as in a real-life grasp. However, when dealing with complex meshes, the collision detection precision can be significantly affected. In this case, fingers could penetrate the object surface, or remain above it when a collision was detected earlier than expected, resulting in an unnatural and unrealistic grasp. To visually quantify grasping quality, we propose a novel error metric based on computing the distance from each capsule trigger to the nearest contact point on the object surface. The quantitative evaluation and the proposed error metric are addressed in detail in Subsection 4.2.
Qualitative evaluation
Most VR experiments include qualitative and quantitative studies to measure realism and immersion. Arguably, questionnaires are the default method to qualitatively assess any experience, and the vast majority of works include them in one way or another [20] [21] [22]. However, one of the main problems with them is the absence of a standardized set of questions for different experiences that would allow fair and easy comparisons. The different nature of the VR systems and experiences makes it challenging to find a set of evaluation questions that fits them all. Following the efforts of [23] towards a standardized embodiment questionnaire, we analyzed several works in the literature [24] [25] that included questionnaires to assess VR experiences to devise a standard one for virtual grasping systems. Inspired by such works, we have identified three main types of questions or aspects:
β’ Motor Control: this aspect considers the movement of the virtual hands as a whole and its responsiveness to the virtual reality controllers. Hands should move naturally and their movements must be caused exactly by the controllers without unwanted movements and without limiting or restricting real movements to adapt to the virtual ones.
β’ Finger Movement: this aspect takes the specific finger movement into account. Such movements must be natural and plausibly. Moreover, they must react properly to the user's intent.
β’ Interaction Realism: this aspect is related to the interaction of the hand and fingers with objects.
The questionnaire, shown in Table 1, is composed of fourteen questions related to the previously described aspects (Table 1 includes, among others, Q9 "It seemed as if the virtual fingers were mine when grabbing an object", Q10 "I felt that grabbing objects was clumsy and hard to achieve", Q11 "It seemed as if finger movement were guided and unnatural", Q12 "I felt that grasps were visually correct and natural", Q13 "I felt that grasps were physically correct and natural", and Q14 "It seemed that fingers were adapting properly to the different geometries"). Following [23], the users of the study will be presented with these questions right after the end of the experience, in a randomized order to limit context effects. In addition, questions must be answered on a 7-point Likert scale: (+3) strongly agree, (+2) agree, (+1) somewhat agree, (0) neutral, (-1) somewhat disagree, (-2) disagree, and (-3) strongly disagree. Results will be presented as a single embodiment score using the following equations:

$$\text{Motor Control} = \frac{(Q1 + Q2) - (Q3 + Q4)}{4}$$
$$\text{Finger Movement Realism} = \frac{Q5 + Q6 + Q7}{3}$$
$$\text{Interaction Realism} = \frac{(Q8 + Q9) - (Q10 + Q11) + Q12 + Q13 + Q14}{7} \qquad (3)$$
Then, using the results of each individual aspect, we obtain the total embodiment score as follows:

$$\text{Score} = \frac{\text{Motor Control} + \text{Finger Movement Realism} + 2\cdot\text{Interaction Realism}}{4} \qquad (4)$$

Interaction realism is the key aspect of this qualitative evaluation, so Equation 4 emphasizes it by weighting it twice as much as the other two aspects.
Quantitative evaluation
With the quantitative evaluation, we aim to evaluate grasping quality in terms of how much it is visually plausible or realistic. In other words, our purpose is to visually quantify our grasping performance, analyzing each finger position and how it fits the object mesh. When a collision is detected by a capsule trigger, we proceed with the calculation of the nearest distance between the finger phalanx surface (delimited by the capsule trigger) and the object mesh (see Equation 8).
In Figure 4, the red capsules represent 3D sphere-tracing volumes, which provide information about the nearest collision from the trace starting point to the first contact point on the object surface that intersects the sphere volume. For each finger phalanx with an attached capsule trigger (represented in green), we cast a sphere trace and obtain the nearest contact point on the object surface, represented as a lime-colored dot (impact point, Ip). In this representation, the total error for the index finger would be the average of the sum of the distances, in millimeters, between the surface of each phalanx and the nearest contact point on the object surface (see Equation 9). The nearest-distance computation is approximated by an equation developed to find the distance between the impact point and the plane that contains the capsule trigger center point and is perpendicular to the longitudinal axis of the red capsule. Capsule trigger centers are located on the surface of the hand mesh, so this computation approximates the nearest distance to the mesh well enough without being computationally too demanding. To compute this distance, we define the following vectors from the three input points (the starting point of the red capsule, the impact point and the capsule trigger center point):

$$\vec{D}_{Ip} = Ip - Sp, \qquad \vec{D}_{CTc} = CTc - Sp \qquad (5)$$
where $\vec{D}_{Ip}$ is the vector from the starting point to the impact point, and $\vec{D}_{CTc}$ represents the direction of the longitudinal axis of the red capsule. They are represented in navy blue and purple, respectively, in Figure 4. Then, we find the cosine of the angle they form through their dot product:

$$\vec{D}_{Ip} \cdot \vec{D}_{CTc} = |\vec{D}_{Ip}|\,|\vec{D}_{CTc}|\cos(\beta) \;\Longrightarrow\; \cos(\beta) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}|\,|\vec{D}_{CTc}|} \qquad (6)$$
We can now substitute that cosine when computing the projection of $\vec{D}_{Ip}$ onto the longitudinal axis of the red capsule ($\vec{D}_{Pr}$ in Figure 4):

$$|\vec{D}_{Pr}| = \cos(\beta)\,|\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{Ip}|\,|\vec{D}_{CTc}|}\,|\vec{D}_{Ip}| = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} \qquad (7)$$
Having that magnitude, we only have to subtract $|\vec{D}_{CTc}|$ in order to obtain the desired distance:

$$ND(Ip, Sp, CTc) = \frac{\vec{D}_{Ip} \cdot \vec{D}_{CTc}}{|\vec{D}_{CTc}|} - |\vec{D}_{CTc}| = \frac{(Ip - Sp) \cdot (CTc - Sp)}{|CTc - Sp|} - |CTc - Sp| \qquad (8)$$
Computing the distance this way, with the final subtraction, yields a positive distance when the impact point is outside the hand mesh and a negative one when it is inside. We compute the nearest distance for each capsule trigger attached to a finger phalanx. As stated before, a negative distance indicates that the finger penetrates the object surface, whereas a positive distance means that the finger stopped above it. The ideal case is a distance of zero, i.e. the finger rests exactly on the object surface.
The total error for the hand is represented by the following equation:
$$\text{HandError} = \sum_{i=1}^{N_{fingers}} \frac{\sum_{j=1}^{N_{CTF}} \left| ND(Ip_{ij}, Sp_{ij}, CTc_{ij}) \right|}{N_{CapsuleTriggersPerFinger}} \qquad (9)$$
Dataset
To benchmark our grasping system we used a set of objects that are frequently used in daily life, such as food items (e.g. cracker box, cans, box of sugar, fruits), tool items (e.g. power drill, hammer, screwdrivers), kitchen items (e.g. eating utensils) and also spherically shaped objects (e.g. tennis ball, racquetball, golf ball). The Yale-CMU-Berkeley (YCB) Object and Model set [1] provides these real-life 3D textured models, scanned with outstanding accuracy and detail. The available objects have a wide variety of shapes, textures and sizes, as can be seen in Figure 5. The advantage of using real-life objects is that users already have previous experience manipulating similar objects, so they will try to grab and interact with them in the same way.
Participants
For the performance analysis, we recruited ten participants (8M/2F) from the local campus. Four of them have experience with VR applications; the rest are inexperienced virtual reality users. Participants will take part in both the qualitative and the quantitative evaluation. The performance analysis procedure will be described in the following subsection, indicating the concrete tasks to be performed by each participant.
Procedure
The system performance analysis begins with the quantitative evaluation. In this first phase, the user will be embodied in a controlled scenario (video: https://youtu.be/4sPhLbHpywM) where 30 different objects will be spawned in a delimited area, with random orientation, and in the same order as represented in Figure 5. The user will try to grasp each object as they would in real life and as quickly as possible. For each grasp, the system will compute the error metric and will also store the time the user spent grasping the object. The purpose of this first phase is to visually analyze grasping quality, which is directly related to user expertise in VR environments and, concretely, with our grasping system. An experienced user knows the system limits, both when interacting with complex geometries and with large objects that make it difficult to perform the grasp action quickly and naturally. For the qualitative evaluation, the same user will be embodied in a photorealistic scenario, replacing the mannequin hands with a human hand model with realistic textures. After interacting freely in the photorealistic virtual environment (video: https://youtu.be/65gdFdwsTVg), the user will have to answer the evaluation questionnaire defined in Table 1. The main purpose is the evaluation of interaction realism, finger and hand movement naturalness and motor control, among other qualitative aspects regarding user experience in VR environments.
RESULTS AND DISCUSSION
In this section we will discuss and analyze the results obtained from the performance analysis process. On the one hand, we will draw conclusions from the average error obtained in grasping each object by each participant group, and also from the overall error per object taking into account all the participants (see Figure 7). On the other hand, we obtained the average elapsed time needed in grasping each object for each participant group, and also the average elapsed time needed for each object taking into account all the participants (see Figure 6). This will allow us to draw conclusions about the most difficult objects to manipulate in terms of accuracy and elapsed time for grasping. Moreover, we can compare system performance used by inexperienced users in comparison with experienced ones.
Qualitative evaluation
Qualitative evaluation for each participant was calculated using Equation 3, obtaining a score for each qualitative aspect. In Table 2 we report, for each group of participants, the average score for each evaluation aspect and the total embodiment score computed using Equation 4. Regarding the results reported in Table 2, the scoring given by experienced users has been less favourable, as they have a more elaborate criterion given their previous experience with virtual reality applications. Finger movement realism (aspect 2) was evaluated similarly by both groups, because the hand closing and opening gestures are guided by the same animation in both cases. Finally, the reported results referring to interaction realism are the lowest in both cases. This is mostly because users cannot control individual finger movements, since the general hand gesture is controlled by a single trigger button of the controller. Even so, the overall embodiment score obtained is 2.08 out of 3.0.
Quantitative evaluation
As expected, inexperienced users have taken longer to grasp almost the entire object set, due to their lack of practice and expertise with the system. This is clearly shown in Figure 6, where experienced users only took longer when grasping some tools such as the flat screwdriver (Figure 5z) and the hammer (Figure 5aa). Inexperienced users take an average of 0.36 seconds longer to grab the objects; in practice, and as far as interaction is concerned, this is not a factor that makes a crucial difference. Analyzing Figure 6, the tuna fish can (Figure 5f), potted meat can (Figure 5h), spatula (Figure 5u), toy airplane (Figure 5ad) and bleach cleaner (Figure 5q) are the most time-consuming objects to grasp, mainly because of their sizes and complex geometries. Since objects are spawned with a random orientation, this can also affect grasping times. Even so, we can conclude that the largest objects are the ones that users take the longest to grasp. Regarding Figure 7, we can observe that the errors obtained by both groups of participants are quite similar. The most significant differences were observed for the power drill (Figure 5v) and the spatula. The power drill has a complex geometry and its size also hinders its grasp, as is the case for the spatula and the toy airplane.
Analyzing the overall error in Figure 7, we conclude that the largest objects, such as the toy airplane, power drill, and bleach cleaner, are those reporting the highest error. In addition, we observe how the overall error decreases from the first objects to the last ones, mainly because user skill and expertise with the grasping system improve progressively. In other words, the results point to a steep learning curve.
APPLICATIONS
Our grasping system can be applied to several existing problems in different areas of interest, such as: robotics [26], rehabilitation [27] and interaction using augmented reality [28]. In robotics, different works have been explored to implement robust grasp approaches that allow robots to interact with the environment. These contributions are organized in mainly four different blocks [29]: methods that rely on known objects and previously estimated grasp points [30], grasping methods for familiar objects [31], methods for unknown objects based on the analysis of object geometry [32] and automatic learning approaches [33]. Our approach is more closely related to this last block, where its use would potentially be a relevant contribution. As a direct application, our system enables human-robot knowledge transfer where robots try to imitate human behaviour in performing grasping.
Our grasping system is also useful for rehabilitation of patients with hand motor difficulties, and it could even be done remotely assisted by an expert [34], or through an automatic system [35]. Several works have demonstrated the viability of patient rehabilitation in virtual environments [27], helping them to improve the mobility of their hands in daily tasks [36]. Our novel error metric in combination with other automatic learning methods, can be used to guide patients during rehabilitation with feedback information and instructions. This will make rehabilitation a more attractive process, by quantifying the patient progress and visualizing its improvements over the duration of rehabilitation.
Finally, our grasping system integrated in UnrealROX [2] enables many other computer vision and artificial intelligence applications by providing synthetic ground truth data, such as depth and normal maps, object masks, trajectories, stereo pairs, etc. of the virtual human hands interacting with real objects from the YCB dataset (Figure 8).
LIMITATIONS AND FUTURE WORKS
• Hand movement is based on a single animation, regardless of object geometry. Depending on the object shape, the grasping gesture could vary: spherical grasping, cylindrical grasping, finger pinch, key pinch, etc. However, our single grasping gesture was experimentally the best when dealing with differently shaped objects.
• An object can be grasped with only one hand. The user can interact with different objects using both hands at the same time, but not with the same object using both hands.
• Sometimes it is difficult to deal with large objects, due to the initial hand posture or because objects slide out of the hand palm due to physical collisions. Experienced users can deal better with this problem.
As future work, and in order to improve our grasping system, we could vary the hand grip gesture according to the geometry of the object being manipulated. This means finding a correspondence between the object geometry and a simple shape, e.g. a tennis ball is similar to a sphere, so a spherical grasp movement would be used. At the application level, there are several possibilities, as discussed in the previous section. However, we would like to emphasize the use of the contact points obtained when grasping an object in virtual reality to transfer that knowledge and human behavior to real robots.
CONCLUSION
This work proposes a flexible and realistic looking grasping system which enables smooth and real-time interaction in virtual reality environments with arbitrary shaped objects. This approach is unconstrained by the object geometry, it is fully controlled by the user and it is modular and easily implemented on different meshes or skeletal configurations. In order to validate our approach, an exhaustive evaluation process was carried out. Our system was evaluated qualitatively and quantitatively by two groups of participants: with previous experience in virtual reality environments (experienced users) and without expertise in VR (inexperienced). For the quantitative evaluation, a new error metric has been proposed to evaluate each grasp, quantifying hand-object overlapping. From the performance analysis results, we conclude that user overall experience was satisfactory and positive. Analyzing the quantitative evaluation, the error difference between experienced users and non experienced is subtle. Moreover, average errors are progressively smaller as more object are grasped. This clearly indicates a steep learning curve. In addition, the qualitative analysis points to a natural and realistic interaction. Users can freely manipulate previously defined dynamic objects in the photorealistic environment. Moreover, grasping contact points can be easily extracted, thus enabling numerous applications, especially in the field of robotics. Unreal Engine 4 project source code is available at GitHub alongside several video demonstrations. This approach can easily be implemented on different game engines. | 5,795 |
1903.05238 | 2963943458 | "Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual o(...TRUNCATED) | "In order to achieve realistic object interactions, physical simulations on the objects should also (...TRUNCATED) | {"abstract":["","Animated human characters in everyday scenarios must interact with the environment (...TRUNCATED) | "A Visually Plausible Grasping System for Object Manipulation and Interaction in Virtual Reality Env(...TRUNCATED) | "W ITH the advent of affordable VR headsets such as Oculus VR/Go and HTC Vive, many works and projec(...TRUNCATED) | 5,795 |
cmp-lg9804001 | 1742257591 | "Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal (...TRUNCATED) | "Graph interpolation can be viewed as an extension of tree adjunction to parse graphs. And, indeed, (...TRUNCATED) | {"abstract":["In this paper, a tree generating system called a tree adjunct grammar is described and(...TRUNCATED) | 0 |
||
cmp-lg9804001 | 1742257591 | "Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal (...TRUNCATED) | "In Lexical Functional Grammars @cite_4 , grammatical functions are loosely coupled with phrase stru(...TRUNCATED) | {"abstract":["The editor of this volume, who is also author or coauthor of five of the contributions(...TRUNCATED) | 0 |
||
cmp-lg9709004 | 1575569168 | "Automatic text categorization is a complex and useful task for many natural language processing app(...TRUNCATED) | "To our knowledge, lexical databases have been used only once in TC. Hearst @cite_10 adapted a disam(...TRUNCATED) | {"abstract":["This dissertation investigates the role of contextual information in the automated ret(...TRUNCATED) | 0 |
||
cmp-lg9709004 | 1575569168 | "Automatic text categorization is a complex and useful task for many natural language processing app(...TRUNCATED) | "Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and(...TRUNCATED) | {"abstract":["In this paper, we present a new approach for word sense disambiguation (WSD) using an (...TRUNCATED) | 0 |
||
cmp-lg9706008 | 2951421399 | "This paper describes an experimental comparison of three unsupervised learning algorithms that dist(...TRUNCATED) | "Word--sense disambiguation has more commonly been cast as a problem in supervised learning (e.g., @(...TRUNCATED) | {"abstract":["The Naive Mix is a new supervised learning algorithm that is based on a sequential met(...TRUNCATED) | Distinguishing Word Senses in Untagged Text | 0 |