Monthly Archives: April 2024

Food & GST: one man’s meat …

Common sense and common experience – one man’s meat is another man’s poison.

The Classification of Food Products for GST purposes

The Tax Office has published a draft determination GSTD 2024/D1 which aims to clarify how GST applies to food which is marketed as a prepared meal.

The ruling adopts a definition of ‘prepared meal’ drawn from a recent Federal Court decision which held that certain frozen foods were not GST-free (see Simplot Australia Pty Limited v Commissioner of Taxation).

In Simplot, the Federal Court considered the GST treatment of a range of frozen food products supplied or imported by Simplot Australia Pty Limited. The products each contained a mix of vegetables along with spices or seasonings, and some also included grains. Some products were labelled as ‘sides’ or provided express or implied serving suggestions, including through pictures that displayed the products served with added protein (for example, chicken or pork).

The Court found that all of the products were food of a kind marketed as a prepared meal (other than soup) and therefore they were not GST-free.

In an interesting attempt to shed some light on a moonless subject, the Commissioner has an almost impossible task. While I understand the basis for the comment, I am not sure that the last sentence in paragraph 31 adds to our understanding:

“Where a product is marketed as a prepared meal, it would be rare that the product would not be food of a kind marketed as a prepared meal.” Riddle me that one.

Having said that, I am not sure that the Draft Determination has advanced the cause for taxpayers when the basis for deciding whether food is GST-free or not comes down to: “a product will be food of a kind marketed as a prepared meal if it is the kind of food that, as a matter of common sense and common experience, is marketed as a prepared meal”.

Common sense and common experience – one man’s meat is another man’s poison.

So, the Draft Determination sets out the guidelines for determining whether a product is marketed as a prepared meal. Those guidelines, relying on the decision in the Simplot case, require the manufacturers of food to determine whether, as a matter of common sense and common experience, the food is marketed as a prepared meal.

The problem is that one person’s meal (taxable) could be another person’s side dish (GST-free).

So I road-tested the guidelines with my family.

What about Birds Eye Steam Fresh In Cheese Sauce Broccoli, Cauliflower & Carrot? Definitely not something that any of us would eat as a meal in itself, so GST-free?? But we have very good friends who are vegetarians who would eat it as a meal in itself, so subject to GST?? One person’s side dish is another person’s main meal.

The proverb ‘one man’s meat is another man’s poison’ means that things liked or enjoyed by one person may be distasteful to another.

The Roman poet and philosopher Lucretius (c. 94 – c. 55 BC) expressed the very same idea in De Rerum Natura (On the Nature of Things):

Ut quod ali cibus est aliis fuat acre venenum.

That which to some is food, to others is rank poison.

I am sorry to report that I am no closer to resolving the issue.

Can AI solve the employee contractor classification issue?

In a recent article in Australian Tax Forum, Gordon Brysland, Assistant Commissioner, Tax Counsel Network, Australian Taxation Office, provides a very detailed and erudite analysis of the application of AI tools to tax classification issues.

While the article looks specifically at food classification issues for GST purposes (another blog for another time), it raises some interesting aspects of the current use of AI in tax matters around the world.

The article noted that closed-end legal classification problems are the focus of Professor Ben Alarie and colleagues at the University of Toronto. This team is at the frontier of machine learning research in the tax classification and prediction spheres. Closed-end legal classification refers to a system where items are classified into predefined categories, and each item can only belong to one of these categories.

The employee/contractor dichotomy is currently one of those closed-end legal classification issues. For a number of tax purposes, the worker must be categorised as either an employee or a contractor.

Out of the University of Toronto work came Blue J Legal, which operates as an international start-up. It has developed a wide range of tax classifier tools on the back of enhanced algorithms.  

One Blue J Legal case study seeks to resolve whether a particular massage therapist is an employee or an independent contractor. The Canadian situation is a “classic grey area” where there is no bright line, no single dispositive fact and courts apply a “total relationship” test. Australia has now seemingly moved away from this test with recent High Court cases, but the underlying factors are still relevant. The dataset for the model includes over 600 decisions over a 20-year period. For each of these, the researchers isolated 21 discrete variables or factors. The article explains what happens next:

But how do courts weigh these variables to arrive at their decision? How do the different variables interact with each other? To put it simply, we do not know. We don’t use a simple formula. That is, we don’t simply count the factors that favour one classification and weigh them against the factors favouring the other. Nor do we use standard regression techniques that require us to set the structure of the relationship between all the variables and the predicted outcome.

Instead, we let the computer find the right answer. We use machine learning technology to figure out the best way to assign weights to each of our variables and to figure out how the different variables interact with each other. This task is practically impossible for a human. Neural networks find hidden connections between the variables that we, as empirical modelers, do not specify and – probably – could not have identified even with unlimited time and resources using conventional approaches to legal research.
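The approach described above — letting the algorithm learn how heavily each factor counts, rather than simply tallying factors — can be sketched in miniature. The following is purely illustrative and is not Blue J’s actual model or data: the factor names, weights, and synthetic “decisions” are all assumptions, and a simple logistic-regression learner stands in for the neural networks the article mentions.

```python
import math
import random

random.seed(0)
N_FEATURES = 21  # hypothetical factors, e.g. control over hours, who supplies tools, right to delegate

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Synthetic "case law": each case is 21 binary factors. The hidden rule weighs
# factors unevenly and includes an interaction term, mimicking the way courts
# do not simply count factors for and against.
def true_label(x):
    score = 2.0 * x[0] + 1.5 * x[1] - 1.0 * x[2] + 1.2 * (x[0] * x[3]) - 0.8
    return 1 if score > 0 else 0  # 1 = employee, 0 = contractor

cases = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(600)]
labels = [true_label(x) for x in cases]

# Train by stochastic gradient descent: the weights are learned from the
# decided cases, not specified by the modeller.
weights = [0.0] * N_FEATURES
bias = 0.0
lr = 0.1
for epoch in range(200):
    for x, y in zip(cases, labels):
        p = sigmoid(bias + sum(w * xi for w, xi in zip(weights, x)))
        err = p - y
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

# Classify a new fact pattern and report a confidence, Blue J-style.
def predict(x):
    p = sigmoid(bias + sum(w * xi for w, xi in zip(weights, x)))
    return ("employee" if p >= 0.5 else "contractor", p)

new_case = [1, 1, 0] + [0] * (N_FEATURES - 3)
category, confidence = predict(new_case)
print(category, round(confidence, 3))
```

The design point is the one the quoted passage makes: nobody tells the model that factor 1 matters twice as much as factor 2, or that factors 1 and 4 interact — the fitting procedure infers that from the body of decided cases.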

Blue J (the Toronto system’s judicial alter ego) found the “right answer” in over 90% of cases. Filling in the 21-feature survey for the massage therapist took five minutes. Blue J compared these facts to all other decisions in the database and was 87.1% confident the therapist was an employee. Blue J provided its own “reasons”:

Although the worker and hirer intended to characterize this relationship as an independent contractor relationship, there are a number of other factors that outweigh the intention in this case.

The article describes various methods for improving the efficiency of the model. One of them, based on a “wisdom of crowds” idea (one of what Brysland calls Aristotle’s children), involves having multiple operators training the system. The basic idea is that, in a binary situation, the more participants, the more the randomised errors of individual coding perceptions and practice will be cancelled out. The average coding judgment of all operators will likely be more accurate than that of the wisest coder alone. Blue J is said to provide a “mechanism through which private decisions and judgments can be turned into one collective answer”. The researchers say that “Blue J continues to learn and gets better and better over time”.

For someone who has grappled with (and continues to grapple with) the exercise of determining the category of worker, I would love to have 87.1% certainty.

It seems that, like the introduction of computers all those years ago, if we can get the right information in, we can get a reliable output for these classification issues. But the use of algorithms by government departments like the ATO can be confronting.

Writing in the UNSW Law Journal, French CJ said:

… the use of automation points us to a dystopian future of inscrutable exercises of statutory power by artificial intelligences deep learning their skills through the discernment of patterns in big data and developing unreadable algorithms to apply that pattern learning to individual cases.

Other aspects of the rule of law also condition the potential use of algorithmic classifiers in public decision-making. The fact that decisions must be transparent, and that some human must be legally accountable for them, raises the need for reasons to be given to those affected.

For those with the time and the interest, I thoroughly recommend Gordon Brysland’s article to you. It is a glimpse into the not-too-distant future of tax consulting.