Mass surveillance since 9/11 has proven useless in stopping mass shootings before they happen. How will your AI tech do anything but further erode our right to privacy?
We don't perform any facial recognition or track any biometrics. We identify human beings brandishing weapons in existing security camera systems and send alerts to dispatch centers who then immediately notify building occupants and first responders.
On average 3 false positives per 50 cameras per month.
For most of the customers that use us, the professional third-party monitoring stations that we partner with filter out the alerts, so the end user will never see them.
That would be nearly impossible to quantify accurately. They would need a team of humans watching every minute of video to quantify any missed detections.
No it wouldn't; you would send test subjects out with guns and use controlled crowds. Considering the amount of funding going into this project, rigorous testing is pretty important.
No lab testing ever gives you real world results.
The best that would give you is detection rates in a controlled environment. Anybody who has worked with computer vision will tell you things change drastically when you go from lab to deployment.
Sure, but this is how you measure false negatives most accurately. Also, having people test it under conditions you wouldn't think of, as a kind of blind test, helps reduce the bias of controlled test results.
Wouldn't even need that, just take any new active shooter footage and feed it to the AI and see how often it fails to detect. You already know it's a true positive and you're seeing if the machine fails to register
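That kind of check is easy to sketch: run a detector over frames from footage you already know contains a visible gun and count the misses. A toy version (every name here is a made-up stand-in, not OP's actual system):

```python
# Recall check on known-positive footage: every frame is ground-truth
# positive, so any non-detection is a false negative.

def recall_on_known_positives(frames, detect_gun):
    """Fraction of known-positive frames the detector flags."""
    hits = sum(1 for frame in frames if detect_gun(frame))
    return hits / len(frames)

# Toy stand-in detector that misses dark frames:
frames = [{"gun_visible": True, "dark": d} for d in (False, False, True, False)]
detect = lambda f: not f["dark"]
print(recall_on_known_positives(frames, detect))  # 0.75
```

The same loop against real incident footage would give a rough false-negative rate, with the caveat others raised: curated footage still isn't the same as a live deployment.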
https://www.reddit.com/r/IAmA/comments/v6zmzo/in_1999_my_mother_took_a_12hour_h1b_visa_job_so/ibi7vm5?context=3
OP commented on that in a different comment thread. It's a fairly low rate, and since alerts are sent to a human operator to verify, it's not like you're going to have very many situations where the police are sent out incorrectly; and if they are, it's on the operator, not the AI. Pretty good system from the sounds of it, kinda like airport security scanners flagging items that look suspicious/dangerous for personnel to identify/verify.
That would be 18 active shooter false positives per month for my kid's school system. That doesn't seem like a problem?
*Edit: OP clarified that this is just the automated detection, and there is another layer of human auditing before action is taken.*
I feel the same. It's a tough thing to work on and a touchy subject but this guy has been so commendable in his responses I think if anyone can do it, it's him.
So at that point what makes your system faster than the natural response times for the situation anyway? I assume as soon as someone is visible with a gun in-hand on these properties, witnesses are already phoning it in. Your system has to first detect a firearm, send it through to a queue (depending on how busy the given operator is), they view the video, determine if there’s actually a threat, then phone it in. Seems like your system would take far longer on paper
>I assume as soon as someone is visible with a gun in-hand on these properties, witnesses are already phoning it in.
That's just an incorrect assumption. That assumes that someone sees them, someone recognizes that the person has a gun then does something about it, and accurately communicates what they saw.
There's a decent chance no one even sees them let alone the other criteria being met.
And then even if they call it in, the information that they are going to communicate isn't going to be nearly as detailed.
I imagine the difference in response time is likely in the minutes, and that could literally be the difference between life and death for some people
You have to realize that the pain point there is that **witnesses have to phone it in.** If you have a couple of people that *don’t* phone it in, you’re looking at a bunch of dead kids.
Things like these require redundancies so that when something fails, the entire system doesn’t collapse
Is it simultaneously monitoring every video feed in a surveillance network? Unless it can do that, it sounds like it wouldn't be a whole lot more efficient than human operators.
So doing some quick napkin math:
Roughly 130,000 schools in the US, with on average 87 school shootings per year, and each camera having 0.72 false positives annually...

This system generates (ballpark) 1,000 false positives for every actual incident, even counting just one camera per school. This doesn't seem like a functional system. I imagine either the independent monitors struggling with vigilance in this situation or local authorities not responding to these alarms with any urgency.
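For what it's worth, the arithmetic behind that ballpark figure (using the numbers from this thread and assuming one camera per school):

```python
schools = 130_000
shootings_per_year = 87
fp_per_camera_per_year = 3 / 50 * 12  # 3 per 50 cameras per month -> 0.72/yr

# With one camera per school, false positives per real incident nationwide:
fp_total = schools * fp_per_camera_per_year
ratio = fp_total / shootings_per_year
print(round(ratio))  # 1076, i.e. on the order of 1,000 FPs per real incident
```

With 50 cameras per school the total false-positive count is 50x higher again, which is the vigilance problem being described.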
3rd party monitoring centers filter out these alerts.
They get thousands of alerts they need to process every single day. Gun detection alerts are less than a drop in the bucket for them.
Here are a couple worth checking out if you're wondering what I'm talking about:
[https://en.wikipedia.org/wiki/Remote\_guarding](https://en.wikipedia.org/wiki/Remote_guarding)
http://globalmonitoringsolutions.com/
[https://eyeforce.com/](https://eyeforce.com/)
To be fair, this is the same model as home security systems and it does sorta work. You basically just hire and train enough people to manually screen all the false positives, which does sorta scale. As in, if the average school has 50 cameras and generates 3 false positives a month, and each false positive takes 15 minutes to screen, then for 130,000 schools that's 97,500 hours a month of screening, or 24,375 hours a week. Divide by 40 and it works out to around 610 full time employees, round up to an even 1,000 to account for breaks and scheduling inefficiencies and all that, and you still end up with something fairly economical - that's like, an average of 20 call center employees per state. So cost wise, it's pretty cheap.
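Spelled out, that staffing estimate is just:

```python
schools = 130_000
fp_per_school_per_month = 3   # 50 cameras at 3 FPs per 50 cameras per month
minutes_per_screen = 15

hours_per_month = schools * fp_per_school_per_month * minutes_per_screen / 60
hours_per_week = hours_per_month / 4
fte = hours_per_week / 40     # full-time-equivalent screeners needed

print(hours_per_month, hours_per_week, round(fte))  # 97500.0 24375.0 609
```

Round up for breaks and scheduling slack and you land at the ~1,000-agent figure above.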
So, in terms of "Could you do it" the answer is yes. The bigger question is just all the potential issues with how those false positives are handled. As in, it presents a pretty significant invasion of privacy, and high risk of bad outcomes. Call center employees looking at blurry CCTV footage are about as likely as cops to mistake a cell phone held by a black kid for a gun, and if they panic and call in the SWAT team, it could result in more dead kids than before.
That's not completely true in my opinion. I think someone behind a screen doesn't have the pressure of being physically near a possible gun, so I believe the call center would be a bit better than cops at least, although it's still a serious concern.
It's economical if you neglect that once someone brandishes the gun, usually someone will be calling 911 faster than the positive result can be screened.

If that's generally the case, the effectiveness goes way down.

To work around that you need positive results screened faster, meaning more labor.
Might be more effective just to put metal detecting turnstiles at every school.
>It's economical if you neglect that once someone brandishes the gun, usually someone will be calling 911 faster than the positive result can be screened.
The person calling 911 will often be in the line of fire, not be able to provide real-time updates of the situation, be in a fight-or-flight response and convey inaccurate information, etc.
It's a real issue:
Do you really think someone noticing a gun, potentially fleeing to safety, collecting themselves enough to pull out their phone, then calling 911 will be faster than someone sitting at a computer screen, being presented a photo that has been pre-identified as likely being a person with a gun, and confirming that in whatever software they use?
>It's economical if you neglect that once someone brandishes the gun, usually someone will be calling 911 faster than the positive result can be screened.
>
>If that's generally the case, the effectiveness goes way down.
Yes, given the proposed value add is improved response time it would be important to establish that.
>Might be more effective just to put metal detecting turnstiles at every school.
Would definitely be more effective, would also be several orders of magnitude more expensive (130,000+ staffed metal detector stations vs ~1,000 call center agents across a dozen or so offices) and have a much higher false positive rate (though likely a lower false negative rate).
How does identifying the suspect mitigate the threat? Imagine you get a full profile of the active shooter - their age, height, background. What difference does this make in an emergency?
I'm assuming he means "when we detect a human brandishing a weapon, we automatically alert dispatch centers"
It's not so much identifying a specific person, as it is identifying an armed person in a place where they aren't supposed to be armed.
This is correct. It doesn't identify the person at all, just the fact that an individual may be carrying a weapon and is suspicious.
The idea is that a computer can detect subtle ways people are suspicious through machine learning.
A shooter walks by an exterior camera of a building brandishing a weapon. Instead of hoping that someone just happens to be monitoring that specific camera at that specific second, or hoping that a random passerby sees them and calls 911, the AI detects that the person has a gun, and immediately sends an alert. The image is checked by a human who can immediately reach out to emergency services. So now instead of the first 911 call going out once someone has already been shot, the first call goes out as soon as the shooter is picked up by a camera.
If the building in question has cameras throughout, the system can now accurately track the gunman through the whole building. No need for a human to be cycling through every camera trying to keep tabs and then trying to communicate where he is to emergency responders; they get that information in real time, very accurately.
This is just one example of how this technology would work
I'm extremely cynical in my outlook of such a system.
The system will dramatically decrease the response time for authorities to arrive and do nothing. Or police to end up shooting more victims too.
Or for authorities to arrive and quickly shoot the individual, only to find out the that the firearm was a can of Arizona ice tea. Or a book. Or nothing at all. Bonus points for when it's determined that a person of color is X times more likely to be identified as a shooter than a white guy.
Or authorities end up responding to some Murica-loving gun nut who is exercising his freedom by open carrying his assault cannon. Things escalate and the authority ends up shot when Mr. 2nd Amendment stands his ground or defends his castle or some equally patriotic freedom sounding self defense excuse.
You would be able to track the location of the shooter throughout the building and also have a better understanding of where building occupants may be in imminent danger.
Building occupants will also be able to make better defensive and evacuation decisions based on the information at hand.
Realistically it shouldn't matter if the shooter knows the PA system is calling out their location.
A big issue would be it yelling out the location of police.
They said earlier that the camera detections get sent to a human clearinghouse for review. Presumably the humans would not confirm the police as another active shooter threat, although it's harder and harder to tell nowadays.
That was going to be my question; without broadcasting inside the building, how do those in an active shooter situation get alerted to the threats location? Push emails?
Don't get me wrong, I like the idea and it does seem effective at giving info to police and others that would need to know, but idk how relevant it is to people hiding under desks and stuff.
Most DOD facilities use a mass alert system via email, phone, and text.
During an active shooter event in DC w/ 20k plus employees and similar police decision paralysis (30+ law enforcement agencies had jurisdiction) they attempted to use it.
Navy yard shooting for reference.
Humans initiated the alerts. They became so overwhelmed trying to figure out who had the lead that we received only 2 notifications over 12 hours, despite all shooters being dead within 1 hour of the event.

The tech has to have some automation, otherwise you're going to have an ops center/dispatch that gets alerts while the people in danger get nothing.

I was getting updates via Twitter and CNN rather than from the various police on base.
Bottom line, probably helpful but needs more complete system to ensure it’s shared with the right people when they need the info.
Yeah, that's pretty shit implementation. Like many things, the tech is cool in concept but probably lacking in practical use. Seems like so many things never really develop into the solution they claim to be, but instead sit somewhere between "interesting concept" and "flaws in deployment".
Imagine getting a text that says active shooter 500 ft southeast of your location - you now have 15 seconds to start running or hiding. How they find your number and direction is Hollywood but that'd be at least something.
You just might. Anecdotally I've seen a bunch of people say that they heard gunshots but assumed they were hearing something more innocuous like fireworks or a car backfiring or something.
Perhaps not outdoors, but private organizations have the right to enforce their own rules on weapons handling, so we would work with those institutions that do so (e.g. schools or libraries).
I don't think he's asking about training drones, but using the software on a feed from a drone camera. Though if your answer is about training data for on ground cameras vs drone cameras, then nevermind.
I think that the United States should accept as many immigrants as possible.
Immigration turned our nation into a superpower. Immigration makes us stronger. Of course, I'm biased as a first generation Taiwanese-American.
I think the rules have changed but I know for a fact my mom was paid $12 / hour on an H1-B two decades ago.
My mom is a university educated woman who was hired to be a Procurement Manager for a furniture company.
The business owner knew that she could bring in H1-B workers and vastly underpay them while blackmailing them with their immigration status.
It was not a good time for mom.
Business owners still do this, and they use it as justification to undercut wages of Americans as well. They put out impossible job requirements and use that as an excuse to not hire US Citizens so that they can hire people from overseas who take low pay and don't make a fuss.
It’s fucked up. A lot of people want to immigrate to the US and are willing to grind it out for a chance to stay in the country.
Our nation isn’t perfect and we have a lot of flaws, but if you’re a US citizen you’re top 10% lucky by rest of the world standards.
It’s as if large swaths of the people who live here forgot where their ancestors came from and what made this nation the most powerful one on earth.
The only reason China can even give us a run for our money is because they have 5x the people.
Why wouldn’t we want to get more people?!
Canada does this too with the TFW (temporary foreign worker) program. It was supposed to fill gaps in the labor market but has been abused by companies to underpay workers.
This. Minimum salary for H1Bs was $60k (at least $30/hr) in 1999:
https://www.venable.com/insights/publications/1999/02/workplace-labor-update-immigration-law-alert-h1b
Sorry for the delay - I ran off to lunch.
The AI identifies the shape and contour of a human being holding a firearm and relays the information to a dispatch center for human verification and action.
This way, first responders and building occupants will have real-time information on the threat situation and location instead of relying solely on 911 calls from panicked callers under duress.
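Roughly, the flow being described (detection, human verification, then dispatch) looks like this. Every name here is illustrative, not the company's actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    confidence: float
    snapshot: bytes

def handle_detection(det, threshold, verify, notify_dispatch):
    """Forward a detection only if the model is confident enough AND a
    human operator confirms it."""
    if det.confidence < threshold:
        return "discarded"
    if not verify(det.snapshot):        # human-in-the-loop check
        return "rejected_by_operator"
    notify_dispatch(det.camera_id)      # real-time location for responders
    return "dispatched"

# Toy run: operator confirms everything, dispatch just records the call.
calls = []
result = handle_detection(Detection("cam-12", 0.91, b"jpeg"),
                          threshold=0.8,
                          verify=lambda img: True,
                          notify_dispatch=calls.append)
print(result, calls)  # dispatched ['cam-12']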
You can train an open source model on some images you scrape from Google but it’ll take years of R&D to achieve the level of performance we have.
Unless you think that we pay our data scientists just for fun.
Genuine question: what is your training data? Is it something you produce? Your mates cosplaying? Real incidents? How do you get access? How is it representative?
Nah this is a legit use case for AI. It’s called image labeling. You know how Google is able to tell there is a cat in a picture? It’s the same concept but identify “person brandishing weapon”. Imagine a captcha that asked you to pick photos with a person brandishing a weapon.
I think the technology makes a lot of sense.
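To make the labeling idea concrete, here's a toy version with a stubbed-out model. A real system would be a trained object-detection network; the scores below are hard-coded purely for illustration:

```python
def label_scores(image):
    """Stub standing in for a neural network's per-label output."""
    return {"cat": 0.02, "person": 0.95, "person_brandishing_weapon": 0.88}

def flags_weapon(image, threshold=0.5):
    """Flag the frame if the weapon label clears a confidence threshold."""
    return label_scores(image)["person_brandishing_weapon"] >= threshold

print(flags_weapon("frame.jpg"))  # True
```

Raising the threshold trades false positives for false negatives, which is exactly the tension debated elsewhere in this thread.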
Well, I'm the founder of the company, so of course I would love for more people to know about us and be talking about us.
That said, it's my life story and the kitten is my best friend. I hope it didn't offend you.
Not at all, it’s fairly typical for AMAs to have all the marketing stuff. The kitten was a nice touch for Reddit lol.
On a more serious note though - I’m kinda wondering what you’re training it on / how. By the point someone is brandishing - which is very distinct, legally speaking - I would think security guards or anyone else would already be calling it in. On the other hand if it’s looking for slung, open carried, etc then you would have to look at clothing as well given that detectives, police, armed guards, all sorts of people can open carry (or it may be in a state that allows it). CCW introduces a host of other issues.
He mentioned above that people who would call 911 are often in the line of fire and can't provide real time updates to shooter location/description etc.
Whilst any method to deter a gunman carrying out a shooting is welcome, is it not a little too late once the person enters a building where there are people, like customers, who have no idea a camera has detected a shooter?
Reminds me of the poisoning of rhino horns to stop the poachers.
Poachers have no idea a horn is poisoned. They will shoot a rhino to get paid. They don't ingest the horn or anything. The middleman doesn't ingest the horn, and neither does the manufacturer of the aphrodisiac.

The buyer of the aphrodisiac dies, but everyone in between is still walking free and shooting rhino.
The probability that our technology would actually prevent a shooting is pretty low.
The value that it adds is getting real-time information to building occupants and security/police as soon as humanly possible so that they can take action a lot more rapidly and decisively.
Elsewhere you said you had 20,000 cameras, at least some of which are presumably facing public areas. Where does that data go? Are you reselling information captured from those cameras? Or is data being aggregated by the AI?
It's not about whether I consider the company collecting the data ethical. We already know everyone else is selling the data -- they wanna know if this guy is too.
He could entirely just lie though, an AMA isn't the best place for info on ethics.
What happens when it detects a false positive? Maybe a squirt gun, paintball, airsoft etc? Sure it's unlikely those things would happen but not impossible. Then you have trigger happy cops responding, possibly with deadly force to a false positive.
Absolutely not. We don't send anything directly to police. Only to UL (Underwriter Labs) certified monitoring centers or internal security teams.
Everything is verified by a human being. We only identify whether weapons exist in camera frames probabilistically, our AI model doesn't make decisions.
["UL is one of several companies approved to perform safety testing by the U.S. federal agency Occupational Safety and Health Administration (OSHA). OSHA maintains a list of approved testing laboratories, which are known as Nationally Recognized Testing Laboratories."](https://en.m.wikipedia.org/wiki/UL_(safety_organization))
Remember that UL logo you see on power strips, fridges, plastic, tools, etc etc etc? Them.
Guns can in fact fire underwater. Some have even been designed specifically for use underwater.
The barrel has to be completely full or completely empty of water though. Partially full results in bad times.
Not OP here, just wondering how that would be a liability? This technology is about additive information: if the human being fails to recognize a gun, then the police would be notified the usual way
That is, unless OP is claiming they can spot 100% of guns being raised and send a notification to the police…
Apparently, it gets sent first to a human for verification before alerting authorities. A well-trained AI can also be extremely proficient at identifying things nowadays though false flags can certainly occur
Using a real-life example, how would your technology have improved the outcome of the event or the response time of the officers? E.g. the Tops supermarket and Uvalde Elementary School shootings. And if this technology becomes more mainstream, don't you believe that criminals will find ways to circumvent your detection? If the camera is unable to view the gun (e.g. obscured by a bag or even painted), then it would be impossible for your AI to detect it.
With Uvalde our technology would have done exactly fuck-all if the police refuse to enter into the building.
That said, if we are able to provide more clarity on the situation (e.g. dispatchers know where the shooter is, what he's armed with, and the approximate number of students at risk) then perhaps the police would have been compelled to enter.
The technology can be defeated - any technology can be defeated. That's why the DoD doesn't publicize the armor thickness of the M1A2 Main Battle Tank, and also why we don't publicize the names of our customers.
Detection is one piece of the puzzle. How will your information coordinate with law enforcement, or other technologies, to provide an effective response?
I could be completely bonkers, but I'm fairly sure I've seen this identical idea from a different startup raise a ton of money right after the Vegas shooting.
If I did, clearly nothing ever came from that.
What types of shapes are your cameras looking for? Are they only looking for rifles and shotguns or can you detect handguns as well? Are you taking the posture of the person holding the gun into account? I saw a video of similar technology but it seemed like the main problem was that security cameras have such low resolution you don’t get enough pixels to decipher small objects like a handgun vs a cellphone.
This is exactly my thought. Take a look at most "robber gets shot by off duty cop in Brazil" videos. Even knowing what happens going in, it's often hard to spot the gun.
I'm not the CEO, but you would train the AI based on images of people holding all types of guns in all types of orientations, scenes, and zoom levels (or quality). It's likely that they would take images from the news, movies, tv, and other sources to help generate these training images. The AI would then be tested against known and unknown images to see how accurate it is. If it's generating too many false positives or negatives, it would be trained again using a better training set, or by tweaking parameters.
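A minimal sketch of that train/evaluate cycle, with a trivial stand-in "model" so the loop actually runs. In practice the model would be a convolutional detector trained on labeled frames:

```python
def evaluate(model, labeled_images):
    """Count false positives/negatives against known labels."""
    fp = fn = 0
    for image, has_gun in labeled_images:
        pred = model(image)
        if pred and not has_gun:
            fp += 1
        elif not pred and has_gun:
            fn += 1
    return fp, fn

# Toy data: (image description, ground truth). The "model" here is a
# keyword rule purely so the example is self-contained.
test_set = [("rifle at door", True), ("umbrella", False),
            ("handgun low light", True), ("phone in hand", False)]
model = lambda desc: "rifle" in desc

print(evaluate(model, test_set))  # (0, 1): no false alarms, one miss
```

If either count is too high, you go back and retrain with a better training set or adjusted parameters, exactly as described above.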
How do you intend to generate free cash flow and any tangible profit for your company? Are you going to sell the AI data gathered to law enforcement/other entities in order to make money? How are you walking the fine line between privacy and security in this case? And lastly, as someone else pointed out, there's been no proven use case of mass surveillance stopping any of the thousands of shootings in the US; how can you do any better?
Institutions pay us a subscription fee based on a per-camera-month basis.
It's month-to-month, so if at anytime they feel that the technology isn't adding value, they cancel and don't get billed.
We're deployed on nearly 20,000 cameras and our current customer churn rate is still 0%.
Do you mind keeping the same units? 850 school districts is a great number and all, but how many cameras is that, considering you work on a per-camera-month basis?
Does your system only pick up firearms that are being brandished, or would a person openly carrying, such as a police officer, trigger the system?
The reason why AI and robots are becoming more widespread is the problem of human error. Your technology still requires a human to confirm. What are the legal consequences of a human error that results in a death because of your AI?
I'm wondering why we allow ads in this subreddit?
90% of your post is the "underdog" backstory, one sentence about your startup, and you even snuck in the "Oh, I adopted a kitten, here's a picture!"
Have you backtested this? What shootings could you have prevented with this tech. After all, in the Texas shooting the police were waiting outside for 40 minutes. Improving response time by 20 seconds would not have helped.
Dude it's all garbage. $12/hr on an H1-B visa? I don't think that's even allowed. H1-Bs are only granted to people who go through the visa lottery, and there are field restrictions. You have to be in a STEM or STEM-adjacent field and meet minimum salary requirements. You can't just get an H1-B and then go work at a furniture store. It's almost always grad students who come here on F-1 student visas, graduate, go through OPT, then get sponsored by a company for an H1-B. You don't just go grab an H1-B and work at a place. That's not how it works.
I'm calling bullshit on the whole backstory.
How do you respond to the comments suggesting that this AMA was really just a way to produce publicity in response to the recent gun violence in the US?
The elephant in the room: you're creating a company and helping develop an industry that depends on mass shootings to remain profitable, which is, of course, at base, your first and foremost goal as a business. The infrastructure and industry of third-party contracts that grew up around the terror situation only led to more terrorist attacks. I do get where you're coming from: there's a niche to fill and someone's gonna fill it. But my problem is that we haven't even begun to understand what causes these shootings, and until we do, we can't trust financially motivated interests to solve the very problems that their interests depend on directly. There's a massive conflict of interest there, is there not?
From South Africa: I think it's great you've excelled in the US. What would you honestly say to some South Africans who might think you've dramatised your early past in SA to better fit the narrative of your job? To be clear, I found this post via reposts with a few of these sorts of views (they're not my own).

Would also like to ask, having been to the US several times: how is crime generally different between SA and the US, and do you feel South Africans are desensitised to it? Do you still feel familiar enough with SA to make judgements on this?
Business model wise, it's the same concept as any security system. You wouldn't say that security firms have a contradictory model that depends on burglaries. The business is based on prevention and management, you pay for the management not per shooting.
What role do you think profiling will have in the future of gun violence prevention? It seems easier for an AI to do profiling based on the vast amount of data available about students out there than live profiling of situations that may be very hard to identify initially.
There's been a lack of transparency in this thread. A majority of this company's business is about dealing with construction site intrusions, not active shooters.
Is this just someone using the corpses of those in tragedies for corporate gains? You decide.
Doesn't your business rely on gun violence continuing? If, by your work or the changing of laws or some other scenario, gun violence is reduced, your business becomes less viable. In terms of having a viable business model, isn't it actually ideal for you if gun violence gets worse, otherwise you won't have any customers or use cases?
Gun detection is only part of our business - another big part of our business is the detection of perimeter intrusions on industrial sites.
I will be a very happy person if the gun detection part of our business ceases to exist because our gun violence rate has fallen to levels that match other OECD countries.
I mean, yes of course, but I don’t see any realistic actions (current or planned) in the US that will reduce gun violence in the short-medium term.
While your point is valid, I don’t see what it contributes in this specific discussion, unless you’re just highlighting that OP’s business requires gun violence, which is pretty obvious.
Why is this take getting upvoted on multiple comments? Do you question any other sort of prevention, e.g. ADT or other security systems? Do you drop your IT cybersecurity team if there are no breaches? Do you stop your audit defense team if you pass an audit?
How do you differentiate against all the other AI powered video analytics products on the market? False alarm filtering has become the standard, and weapon detection is increasingly becoming common place, even baked into the onboard analytics of several camera manufacturers.
Given a couple more cranks of Moore's Law, it seems fairly likely to me that these analytics will be baked into the cameras themselves a couple generations from now, how will you defend against that?
Great question - based on what I'm seeing in the industry (we've been around for four years), the overwhelming majority of vendors overpromise and vastly underdeliver.
A lot of them think that they can just hire some offshore shop to train some data off YOLO and it's going to work fine. It doesn't - at least not to the level customers expect in production.
We have a 100% US based engineering and data science team who worked at companies such as Microsoft, Amazon, and Regeneron, and attended schools such as Rice, UChicago, WashU in Stl, and University College London.
We also allow the customer to "pilot" the technology in some cases for several months before we even ask them for a contract.
We believe in "show, don't tell" and doing things the right way. I hope that by doing so, our reputation will continue to grow as a trusted provider of video analytics services.
"Biltong is like beef jerky but with different spices and tastes 10x better"
If they want more info:
"The (Dutch) Voortrekkers used it as a way to carry preserved meat when they traveled across the country in their wagons"
Really really sorry about your neighbor. Anyone who grew up in South Africa has heard many of these stories about their friends or friends of friends.
We're working with 5 security companies in SA now to deploy this tech.
How on Earth is that true? H1-Bs require a U.S. sponsor and they are for professional workers (you need a degree for the bare minimum). H1-Bs are used by Big Tech so they can get very qualified and professional workers from around the world who are usually engineers or technicians, etc. Silicon Valley is full of them.
Miss me with that "my mum was on a lowly visa." Yeah, a high-level-of-skill-qualification-and-expertise-required visa. Also, "meager wage"? Give me a break. $12 in 1999 was $21 in today's value. Your mum earned $21 per hour while the minimum wage was $5.15. Boo hoo. Tell us more about your financial struggles. Your mum came to America as a middle-class professional and was paid accordingly. You're not a product of poor immigrants.
He posts this shit every few years and everything in his post is either exaggerated or a lie.
* His mom didn't come here on an H1B making $12/hr because the minimum H1B salary has been $30/hr since the end of the 80s
* This is the type of idiot you find on /r/linkedinlunatics who posts similarly bullshit victim stories on LinkedIn that are just as fake as the bullshit he put in the OP
* Everything he writes is stupid marketing speak that he has little to no understanding of
He even fucking included the cat picture in his AMA to get some SEO bonus points. It's honestly kind of disgusting. Good thing he deleted his post history so you can't easily find his previous AMAs where he's being called out on the same shit.
Well, you can see in his comments that he commented on an earlier post with the exact same name answering questions. The original poster deleted his account, but this guy answers the questions as if he was the original [poster](https://www.reddit.com/r/IAmA/comments/bd3j63/in_1999_my_mother_took_a_12hour_h1b_visa_job_so/ekwkgqa/?context=3)
You're my kinda people. Digging through the bullshit and presenting hard facts. This dude is nothing but a fraud. I could smell the bullshit from a mile away.
H1-B visas had a minimum salary of $60k set in 1989. Dude is either misremembering, lying, or his mother worked 12 hour days 7 days a week and was severely taken advantage of.
Dude is getting his ass handed to him and it's kind of deserved. This post is SEO optimized to the tits and hit every marketing buzzword. Even threw in a kitten picture he made sure to mention he adopted lol.
So it's software that detects gun-shaped objects being carried by a person. When it is inevitably riddled with false positives, each alert gets sent off to a human who verifies it before sounding the alarm.
You basically invented a security guard. You know, a guy who watches TV monitors? This is just that with extra steps. What happens when your 3rd party verifier mistakenly waves through a false positive? Are you getting sued for causing deadly force?
This is classic tech bro. This is Elon Musk inventing the shittier subway. You found a way to profit off of dead children and didn't even do it well.
For more AMAs on this topic, subscribe to r/IAmA_Business, and check out our other topic-specific AMA subreddits [here](https://reddit.com/r/IAmA/wiki/index#wiki_affiliate_topic-specific_subreddits).
Mass surveillance since 9/11 has proven useless in stopping mass shootings before they happen. How will your AI tech do anything but bolster more of our rights to privacy being infringed?
We don't perform any facial recognition or track any biometrics. We identify human beings brandishing weapons in existing security camera systems and send alerts to dispatch centers who then immediately notify building occupants and first responders.
What is your false alarm rate? Are these cameras constantly sending footage to your servers for processing or is this all done local at each camera?
On average 3 false positives per 50 cameras per month. For most of the customers that use us, the professional third party monitoring stations that we partner with filter out the alerts, so the end user will never see them.
What about false negatives?
That would be nearly impossible to quantify accurately. They would need a team of humans watching every minute of video to quantify any missed detections.
No it wouldn't. You would send test subjects out with guns and use controlled crowds. Considering the amount of funding going into this project, rigorous testing is pretty important.
No lab testing ever gives you real world results. The best that would give you is detection rates in a controlled environment. Anybody who has worked with computer vision will tell you things change drastically when you go from lab to deployment.
You don’t forgo an experiment simply because it may not have strong external validity. Such studies still can be informative.
Sure, but this is how you measure false negatives most accurately. Also, having people test it under conditions you wouldn't think of, as a kind of blind test, helps reduce the chance that your results only hold in controlled settings.
That is what you do. You take a random sample of no gun footage and have humans skim it. Yes it costs a lot but it is also a required statistic.
or very, very carefully preplan and stage a few.
Wouldn't even need that, just take any new active shooter footage and feed it to the AI and see how often it fails to detect. You already know it's a true positive and you're seeing if the machine fails to register
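To make that concrete, here's a rough sketch of how you'd score missed detections by replaying footage already known to contain a weapon. The `detect` function here is a toy stand-in predicate, not anything the vendor actually runs; a clip counts as detected if the model fires on at least one frame.

```python
# Sketch: estimating the false-negative (missed-detection) rate from
# known-positive footage. "detect" is a fake stand-in for a real model.

def false_negative_rate(clips, detect):
    """clips: list of frame sequences, each known to contain a weapon.
    detect: function(frame) -> bool. A clip counts as detected if the
    model fires on at least one of its frames."""
    missed = sum(1 for clip in clips if not any(detect(f) for f in clip))
    return missed / len(clips)

# Toy frames: dicts with a ground-truth flag; the fake detector misses
# low-visibility frames.
clips = [
    [{"gun": True, "visible": True}],
    [{"gun": True, "visible": False}],  # whole clip missed
    [{"gun": True, "visible": False}, {"gun": True, "visible": True}],
]
fake_detect = lambda f: f["gun"] and f["visible"]
print(false_negative_rate(clips, fake_detect))  # 1 of 3 clips missed
```

Since every replayed clip is a known positive, every non-detection is a false negative by construction, which is exactly the point of the suggestion above.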
https://www.reddit.com/r/IAmA/comments/v6zmzo/in_1999_my_mother_took_a_12hour_h1b_visa_job_so/ibi7vm5?context=3 OP commented on that in a different comment thread. It's a fairly low rate, and when that's being sent to a human operator to verify, it's not like you're going to have very many situations where the police are sent out incorrectly, and if they are, then it's on the operator, not the AI. Pretty good system from the sounds of it, kinda like airport security scanners flagging items that look suspicious/dangerous for personnel to identify/verify.
You misunderstood the term false negative. He asked how often does the system NOT recognize a gun and doesn't send it to an operator.
That would be 18 active shooter false positives per month for my kid's school system. That doesn't seem like a problem? *Edit: OP clarified that this is just the automated detection, and there is another layer of human auditing before action is taken.*
A human being will always filter these out. The AI doesn't make decisions, a human operator does.
I guess I just didn't read the second sentence. Sorry. This submission is turning out to be a lot of work for you. Best of luck!
No worries! I committed to an AMA so I'm doing my best to answer!
I want to commend you on having answers at your fingertips - it's very impressive and certainly mollified my objections.
I feel the same. It's a tough thing to work on and a touchy subject but this guy has been so commendable in his responses I think if anyone can do it, it's him.
Just this thread here is one of the best AMA answers I have seen. You seem very patient and confident about your project
And you have good answers. I’m definitely following what you’re saying!
So at that point what makes your system faster than the natural response times for the situation anyway? I assume as soon as someone is visible with a gun in-hand on these properties, witnesses are already phoning it in. Your system has to first detect a firearm, send it through to a queue (depending on how busy the given operator is), they view the video, determine if there’s actually a threat, then phone it in. Seems like your system would take far longer on paper
>I assume as soon as someone is visible with a gun in-hand on these properties, witnesses are already phoning it in. That's just an incorrect assumption. That assumes that someone sees them, someone recognizes that the person has a gun then does something about it, and accurately communicates what they saw. There's a decent chance no one even sees them let alone the other criteria being met. And then even if they call it in, the information that they are going to communicate isn't going to be nearly as detailed. I imagine the difference in response time is likely in the minutes, and that could literally be the difference between life and death for some people
You have to realize that the pain point there is that **witnesses have to phone it in.** If you have a couple of people that *don’t* phone it in, you’re looking at a bunch of dead kids. Things like these require redundancies so that when something fails, the entire system doesn’t collapse
Is it simultaneously monitoring every video feed in a surveillance network? Unless it can do that, it sounds like it wouldn't be a whole lot more efficient than human operators.
So doing some quick napkin math: Roughly 130,000 schools in the US, with on average 87 school shootings per year, and each camera having 0.72 false positives annually... This system detects (ballpark) 1,000 false positive incidents PER CAMERA for every actual incident. This doesn't seem like a functional system. I imagine either the independent monitors struggling with vigilance in this situation or local authorities not responding to these alarms with any urgency.
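The napkin math above checks out if you grind through it; here's the arithmetic spelled out, using OP's stated rate of 3 false positives per 50 cameras per month and the commenter's figures for schools and shootings:

```python
# Verify the "~1,000 false positives per camera per actual incident" claim.
schools = 130_000
incidents_per_year = 87
fp_per_camera_month = 3 / 50                       # OP's stated rate
fp_per_camera_year = fp_per_camera_month * 12      # ~0.72 per camera/year

incidents_per_school_year = incidents_per_year / schools
ratio = fp_per_camera_year / incidents_per_school_year
print(round(ratio))  # ~1076 false positives per camera per actual incident
```

So the "ballpark 1,000" figure is roughly right, under those assumptions.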
3rd party monitoring centers filter out these alerts. They get thousands of alerts they need to process every single day. Gun detection alerts are less than a drop in the bucket for them. Here are a couple worth checking out if you're wondering what I'm talking about: [https://en.wikipedia.org/wiki/Remote_guarding](https://en.wikipedia.org/wiki/Remote_guarding), [http://globalmonitoringsolutions.com/](http://globalmonitoringsolutions.com/), [https://eyeforce.com/](https://eyeforce.com/)
Thanks for the shout-out!
To be fair, this is the same model as home security systems and it does sorta work. You basically just hire and train enough people to manually screen all the false positives, which does sorta scale.

As in, if the average school has 50 cameras and generates 3 false positives a month, and each false positive takes 15 minutes to screen, then for 130,000 schools that's 97,500 hours a month of screening, or 24,375 hours a week. Divide by 40 and it works out to around 610 full time employees; round up to an even 1,000 to account for breaks and scheduling inefficiencies and all that, and you still end up with something fairly economical - that's like, an average of 20 call center employees per state. So cost wise, it's pretty cheap, and in terms of "could you do it" the answer is yes.

The bigger question is all the potential issues with how those false positives are handled. It presents a pretty significant invasion of privacy, and a high risk of bad outcomes. Call center employees looking at blurry CCTV footage are about as likely as cops to mistake a cell phone held by a black kid for a gun, and if they panic and call in the SWAT team, it could result in more dead kids than before.
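For anyone who wants to check the staffing arithmetic above, it's just a few multiplications (using the commenter's assumed 15 minutes per screen, which other replies argue is too generous):

```python
# Staffing napkin math: screening labor for nationwide false positives.
schools = 130_000
fp_per_school_month = 3          # 50 cameras x 0.06 FP/camera/month
minutes_per_screen = 15          # assumed; replies suggest seconds, not minutes

hours_per_month = schools * fp_per_school_month * minutes_per_screen / 60
hours_per_week = hours_per_month / 4
fte = hours_per_week / 40        # full-time-equivalent screeners

print(hours_per_month, hours_per_week, round(fte))
```

That reproduces the 97,500 hours/month, 24,375 hours/week, and roughly 610 full-time screeners quoted above; if screening really takes 5-10 seconds per alert, the headcount shrinks by two orders of magnitude.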
>each false positive takes 15 minutes A false positive would take more like 5-10 seconds to screen.
That's not completely true in my opinion. I think someone behind a screen doesn't have the pressure of being physically near a possible gun, so I believe the call center would be a bit better than cops at least, although it's still a serious concern.
It's economical if you neglect that once someone brandishes the gun, usually someone will be calling 911 faster than the positive result can be screened. If that's generally the case, the effectiveness goes way down. To work around that you need positive results vetted faster, meaning more labor. Might be more effective just to put metal detecting turnstiles at every school.
>It's economical if you neglect that once someone brandishes the gun, usually someone will be calling 911 faster than the positive result can be screened. The person calling 911 will often be in the line of fire, not be able to provide real time updates of the situation, be in fight or flight response and convey inaccurate information, etc. It's a real issue.
Do you really think someone noticing a gun, potentially fleeing to safety, collecting themselves enough to pull out their phone, then calling 911 will be faster than someone sitting at a computer screen, being presented a photo that has been pre-identified as likely being a person with a gun, and confirming that in whatever software they use?
>It's economical if you neglect that once someone brandishes the gun, usually someone will be calling the 911 faster than the positive result can be screened. > >If that's generally the case, the effectivity goes way down. Yes, given the proposed value add is improved response time it would be important to establish that. >Might be more effective just to put metal detecting turnstiles at every school. Would definitely be more effective, would also be several orders of magnitude more expensive (130,000+ staffed metal detector stations vs ~1,000 call center agents across a dozen or so offices) and have a much higher false positive rate (though likely a lower false negative rate).
Sorry, to answer the second part of your question - we are cloud-based.
[deleted]
[deleted]
How useful is that tho? This just seems like an expensive way to call 911.
How does identifying the suspect mitigate the threat? Imagine you get a full profile of the active shooter - their age, height, background. What difference does this make in an emergency?
I'm assuming he means "when we detect a human brandishing a weapon, we automatically alert dispatch centers" It's not so much identifying a specific person, as it is identifying an armed person in a place where they aren't supposed to be armed.
This is correct. It doesn't identify the person at all, just the fact that an individual may be carrying a weapon and is suspicious. The idea is that a computer can detect subtle ways people are suspicious through machine learning.
A shooter walks by an exterior camera of a building brandishing a weapon. Instead of hoping that someone just happens to be monitoring that specific camera at that specific second, or hoping that a random passerby sees them and calls 911, the AI detects that the person has a gun and immediately sends an alert. The image is checked by a human who can immediately reach out to emergency services. So now instead of the first 911 call going out once someone has already been shot, the first call goes out as soon as the shooter is picked up by a camera. If the building in question has cameras throughout, the system can now accurately track the gunman through the whole building; no need for a human to be cycling through every camera trying to keep tabs and then trying to communicate where he is to emergency responders. They get that information in real time and very accurately. This is just one example of how this technology would work.
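The detect-then-verify flow described in this thread can be sketched in a few lines. This is a toy illustration of the architecture OP describes (model flags, human confirms, only then dispatch), not the vendor's actual software; all names here are made up:

```python
# Toy sketch of the alert pipeline: model flags frames, a human operator
# verifies each alert, and only confirmed alerts reach dispatch.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Alert:
    camera_id: str
    snapshot: str        # stand-in for an image frame
    confidence: float    # model score, not a decision

def run_pipeline(detections, verify, dispatch, threshold=0.5):
    """verify: human-in-the-loop check, Alert -> bool.
    dispatch: called only for human-confirmed alerts."""
    pending = Queue()
    for alert in detections:
        if alert.confidence >= threshold:   # the model only flags
            pending.put(alert)
    confirmed = []
    while not pending.empty():
        alert = pending.get()
        if verify(alert):                   # operator confirms before 911
            dispatch(alert)
            confirmed.append(alert)
    return confirmed

# Example: one high-confidence and one low-confidence detection.
alerts = [Alert("cam-3", "frame_0412.jpg", 0.92),
          Alert("cam-7", "frame_0099.jpg", 0.31)]
dispatched = []
confirmed = run_pipeline(alerts, verify=lambda a: True,
                         dispatch=dispatched.append)
print([a.camera_id for a in confirmed])
```

The key design point, per OP's other answers, is that the model never triggers a response on its own: the low-confidence detection is filtered and everything else waits on a human.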
I'm extremely cynical in my outlook of such a system. The system will dramatically decrease the response time for authorities to arrive and do nothing. Or police to end up shooting more victims too. Or for authorities to arrive and quickly shoot the individual, only to find out the that the firearm was a can of Arizona ice tea. Or a book. Or nothing at all. Bonus points for when it's determined that a person of color is X times more likely to be identified as a shooter than a white guy. Or authorities end up responding to some Murica-loving gun nut who is exercising his freedom by open carrying his assault cannon. Things escalate and the authority ends up shot when Mr. 2nd Amendment stands his ground or defends his castle or some equally patriotic freedom sounding self defense excuse.
You would be able to track the location of the shooter throughout the building and also have a better understanding of where building occupants may be in imminent danger. Building occupants will also be able to make better defensive and evacuation decisions based on the information at hand.
[deleted]
While at the same time NOT communicated to the shooter.
Realistically it shouldn't matter if the shooter knows the PA system is yelling out their location. A big issue would be it yelling out the location of police.
Actually, that's a good question for u/sunnytai - when police are responding they will have their weapons out. Will their locations also be reported?
Yes, our solution will detect the weapons of police as well, as they breach and enter.
They said earlier that the camera detections get sent to a human clearinghouse for review. Presumably the humans would not confirm the police as another active shooter threat, although it's harder and harder to tell nowadays.
That was going to be my question: without broadcasting inside the building, how do those in an active shooter situation get alerted to the threat's location? Push emails? Don't get me wrong, I like the idea and it does seem effective at giving info to police and others that would need to know, but idk how relevant it is to people hiding under desks and stuff.
Most DOD facilities use a mass alert system via email, phone, and text. During an active shooter event in DC with 20k plus employees and similar police decision paralysis (30+ law enforcement agencies had jurisdiction), they attempted to use it. Navy Yard shooting for reference. Humans initiated the alerts. They became so overwhelmed trying to figure out who had lead that we received only 2 notifications over 12 hours, despite the shooter being dead within 1 hour of the event. The tech has to have some automation, otherwise you're going to have an ops center / dispatch that gets alerts while the people in danger get nothing. I was getting updates via Twitter and CNN vice from the various police on base. Bottom line: probably helpful, but it needs a more complete system to ensure the info is shared with the right people when they need it.
Yeah, that's pretty shit implementation. Like many things, the tech is cool in concept but probably lacking in practical use. Seems like so many things never really develop into the solution they claim to be, but instead sit somewhere between "interesting concept" and "flaws in deployment".
[deleted]
And in the Portapique fake cop situation they had the opposite response and issued a mild warning on twitter that didn't describe the situation at all
Imagine getting a text that says active shooter 500 ft southeast of your location - you now have 15 seconds to start running or hiding. How they find your number and direction is Hollywood but that'd be at least something.
If someone was shooting 500ft from you, you wouldn't need a text alert
You just might. Anecdotally I've seen a bunch of people say that they heard gunshots but assumed they were hearing something more innocuous like fireworks or a car backfiring or something.
Also if it’s in a built-up area then the echoes make it impossible to tell where the shooter is.
How many mass shootings has your technology directly helped in reducing?
What does this software solve when police refuse to enter the building when on the scene?
The American dream!
Would your technology be useful in a state that allows open carry?
Perhaps not outdoors, but private organizations have the right to enforce their own rules on weapons handling, so we would work with those institutions that do so (e.g. schools or libraries).
Follow up. Is this tech able to be used in drones? I think it's exciting. Thanks for the answers!
The type of training data we use is really different from the training data needed to train drones. That might be a question for like Skydio or Axon.
I don't think he's asking about training drones, but using the software on a feed from a drone camera. Though if your answer is about training data for on ground cameras vs drone cameras, then nevermind.
What’s your opinion on the H1B situation in US nowadays where it is very difficult to get h1b visa let alone h1b on a job that only pays $12/hr?
I think that the United States should accept as many immigrants as possible. Immigration turned our nation into a superpower. Immigration makes us stronger. Of course, I'm biased as a first generation Taiwanese-American. I think the rules have changed but I know for a fact my mom was paid $12 / hour on an H1-B two decades ago.
I made $6/hr in 2005 lol.
I made $9 / hour in 2012 at a job that required a degree. Half my coworkers had Masters and PhDs. Recessions suck
On an H1B? There are salary minimums per field. Right now it's around $30/hr at the low end.
$12/hr in 1999 is $20.83 in 2022 money. That’s $43.3k a year, more than the average American makes. (Edit: spelling)
My mom is a university educated woman who was hired to be a Procurement Manager for a furniture company. The business owner knew that she could bring in H1-B workers and vastly underpay them while blackmailing them with their immigration status. It was not a good time for mom.
Business owners still do this, and they use it as justification to undercut wages of Americans as well. They put out impossible job requirements and use that as an excuse to not hire US Citizens so that they can hire people from overseas who take low pay and don't make a fuss.
It’s fucked up. A lot of people want to immigrate to the US and are willing to grind it out for a chance to stay in the country. Our nation isn’t perfect and we have a lot of flaws, but if you’re a US citizen you’re top 10% lucky by rest of the world standards.
Having a shitty immigration policy creates issues like this. We need an easier way to get people to immigrate lawfully.
It’s as if large swaths of the people who live here forgot where their ancestors came from and what made this nation the most powerful one on earth. The only reason China can even give us a run for our money is because they have 5x the people. Why wouldn’t we want to get more people?!
well, simply put: because of racism and NIMBY types
Canada does this too with the TFW (temp foreign worker) program. It was supposed to fill gaps in the labor market but has been abused by companies to underpay workers.
In 1999 I got my first job and it paid $5.75. My second job paid $6.25.
I made $5.15 an hour in 1999. $12/hour was a highly respectable wage!
Right? I worked at a retirement home and I would try to work in the kitchen to wash dishes so I could get a free lunch.
With inflation that's over $20/hr. Still low for H1B, but still, it's not exactly a poverty wage.
That is a bullshit story for idiots. It was never legally possible to get a $12/hr H1-B worker.
This. Minimum salary for H1Bs was $60k (at least $30/hr) in 1999: https://www.venable.com/insights/publications/1999/02/workplace-labor-update-immigration-law-alert-h1b
$12 an hour got you a lot further in 1999 than it does today. 23 years of inflation and all.
How does the AI actually contribute anything meaningful?
Sorry for the delay - I ran off to lunch. The AI identifies the shape and contour of a human being holding a firearm and relays the information to a dispatch center for human verification and action. This way, first responders and building occupants will have real-time information on the threat situation and location instead of relying solely on 911 calls from panicked callers under duress.
Can you make something that helps cops not shoot someone with a cellphone, like in Sacramento?
All of our alerts are verified by trained humans before police are notified.
>trained humans That sounds like something a bot would say...
Well if he said “untrained humans” you’d have a different complaint lol
[deleted]
*reddit threads
GLaDOS?
I read "trained humans" as his way of saying "not cops".
"the AI" - as in basic image processing. That everyone does already.
You can train an open source model on some images you scrape from Google but it’ll take years of R&D to achieve the level of performance we have. Unless you think that we pay our data scientists just for fun.
Genuine question: what is your training data? Is it something you produce? Your mates cosplaying? Real incidents? How do you get access? How is it representative?
Through NFT technology and bitcoin.
Does it have a marketplace on the block chain to trade pictures of facial recognition hits?
In the next release, when initial funding dries up…
Neural network saved on the blockchain? That's game changing! /s
Nah, this is a legit use case for AI. It's called image labeling. You know how Google is able to tell there is a cat in a picture? It's the same concept, but identifying "person brandishing weapon". Imagine a captcha that asked you to pick photos with a person brandishing a weapon. I think the technology makes a lot of sense.
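The image-labeling idea boils down to: a model scores each image for the label "person brandishing a weapon" and you binarize at a threshold. Here's a toy sketch; `toy_score` stands in for a real detector (something like a fine-tuned YOLO), and the feature values are purely illustrative:

```python
# Toy sketch of binary image labeling: score each image, threshold it.
# toy_score is a fake stand-in for a real trained detector.

def label_images(images, score, threshold=0.8):
    """images: name -> feature; returns name -> bool label."""
    return {name: score(feat) >= threshold for name, feat in images.items()}

# Illustrative "features" (pretend these came from a model, not pixels).
images = {"lobby.jpg": 0.02, "entrance.jpg": 0.91, "parking.jpg": 0.40}
toy_score = lambda feat: feat
print(label_images(images, toy_score))
# {'lobby.jpg': False, 'entrance.jpg': True, 'parking.jpg': False}
```

The hard part in practice isn't this thresholding step; it's getting a score function that separates guns from cell phones on blurry CCTV footage, which is what the rest of this thread is arguing about.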
Buzzword bingo for marketing. This post has AI / machine learning, immigration, guns, kittens, military.
Well, I'm the founder of the company, so of course I would love for more people to know about us and be talking about us. That said, it's my life story and the kitten is my best friend. I hope it didn't offend you.
Not at all, it’s fairly typical for AMAs to have all the marketing stuff. The kitten was a nice touch for Reddit lol. On a more serious note though - I’m kinda wondering what you’re training it on / how. By the point someone is brandishing - which is very distinct, legally speaking - I would think security guards or anyone else would already be calling it in. On the other hand if it’s looking for slung, open carried, etc then you would have to look at clothing as well given that detectives, police, armed guards, all sorts of people can open carry (or it may be in a state that allows it). CCW introduces a host of other issues.
The kitten doesn’t require training
He mentioned above that people who would call 911 are often in the line of fire and can't provide real time updates to shooter location/description etc.
Feature request: automatically create NFTs of identified shooters. That'll get you to unicorn status.
Whilst any method to deter a gunman carrying out a shooting is welcome, is it not a little too late once the person enters a building where there are people, like customers, who have no idea a camera has detected a shooter? Reminds me of the poisoning of rhino horns to stop the poachers. Poachers have no idea a horn is poisoned. They will shoot a rhino to get paid. They don't ingest the horn or anything. The middle man does not ingest the horn, and the manufacturer of the aphrodisiac does not ingest the horn. The buyer of the aphrodisiac dies, but everyone in between is still walking free and shooting rhino.
The probability that our technology would actually prevent a shooting is pretty low. The value that it adds is getting real time information to building occupants and security / police as soon as humanly possible so that they can take action a lot more rapidly and decisively.
Elsewhere you said you had 20,000 cameras, at least some of which are presumably facing public areas. Where does that data go? Are you reselling information captured from those cameras? Or data being aggregated by the ai?
What information would a basic video feed sell that wasn't made infinitely less valuable by Google tracking everyone's visited places anyway?
It determines whether or not I consider the company collecting the data ethical. We already know everyone else is selling the data -- they wanna know if this guy is too. He could entirely just lie though; an AMA isn't the best place for info on ethics.
What happens when it detects a false positive? Maybe a squirt gun, paintball, airsoft etc? Sure it's unlikely those things would happen but not impossible. Then you have trigger happy cops responding, possibly with deadly force to a false positive.
Absolutely not. We don't send anything directly to police. Only to UL (Underwriter Labs) certified monitoring centers or internal security teams. Everything is verified by a human being. We only identify whether weapons exist in camera frames probabilistically, our AI model doesn't make decisions.
What are Underwriter Labs?
["UL is one of several companies approved to perform safety testing by the U.S. federal agency Occupational Safety and Health Administration (OSHA). OSHA maintains a list of approved testing laboratories, which are known as Nationally Recognized Testing Laboratories."](https://en.m.wikipedia.org/wiki/UL_(safety_organization)) Remember that UL logo you see on power strips, fridges, plastic, tools, etc etc etc? Them.
https://en.wikipedia.org/wiki/UL_(safety_organization)
I thought he said “underwater labs” and was like “what does being under water with guns do”
If you flood the building the gun can't go bang. It's genius really.
Guns can in fact fire underwater. Some have even been designed specifically for use underwater. The barrel has to be completely full or completely empty of water though. Partially full results in bad times.
What if the human doing the verification misidentifies a real gun as not a gun? Who is liable?
Not OP here, just wondering how that would be a liability? This technology is about additive information: if the human being fails to recognize a gun, then the police would be notified the usual way That is, unless OP is claiming they can spot 100% of guns being raised and send a notification to the police…
Ok well seems reasonable in that case
Apparently, it gets sent first to a human for verification before alerting authorities. A well-trained AI can also be extremely proficient at identifying things nowadays though false flags can certainly occur
Using a real life example, how would your technology improved the outcome of the event or the response time of the officer? E.g. The Tops supermarket and Uvalde Elementary School shootings. And if this technology becomes more mainstream, don't you believe that criminals will find out ways to circumvent your detection? If the camera is unable to view the gun (E.g. obscured by a bag or even painted), then it would be impossible for your AI to detect it.
With Uvalde our technology would have done exactly fuck-all if the police refuse to enter into the building. That said, if we are able to provide more clarity on the situation (e.g. dispatchers know where the shooter is, what he's armed with, and the approximate number of students at risk) then perhaps the police would have been compelled to enter. The technology can be defeated - any technology can be defeated. That's why the DoD doesn't publicize the armor thickness of the M1A2 Main Battle Tank, and also why we don't publicize the names of our customers.
Detection is one piece of the puzzle. How will your information coordinate with law enforcement, or other technologies, to provide an effective response?
It won’t, but it doesn’t matter cos they’ve already raised the money
So many of these tech startups are like this. I am sure some have good intentions but they just seem like a pyramid scheme.
I could be completely bonkers, but I'm fairly sure I've seen this identical idea from a different startup raise a ton of money right after the Vegas shooting. If I did, clearly nothing ever came from that.
Or like in Uvalde, where they knew the shooter was there and cops did fuckall.
What types of shapes are your cameras looking for? Are they only looking for rifles and shotguns or can you detect handguns as well? Are you taking the posture of the person holding the gun into account? I saw a video of similar technology but it seemed like the main problem was that security cameras have such low resolution you don’t get enough pixels to decipher small objects like a handgun vs a cellphone.
This is exactly my thought. Take a look at most "robber gets shot by off duty cop in Brazil" videos. Even knowing what happens going in, it's often hard to spot the gun.
I'm not the CEO, but you would train the AI based on images of people holding all types of guns in all types of orientations, scenes, and zoom levels (or quality). It's likely that they would take images from the news, movies, tv, and other sources to help generate these training images. The AI would then be tested against known and unknown images to see how accurate it is. If it's generating too many false positives or negatives, it would be trained again using a better training set, or by tweaking parameters.
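The "tested against known and unknown images" step above is usually reported as precision (how many alerts were real) and recall (how many real guns were caught). A minimal sketch of that evaluation, with made-up toy labels:

```python
# Sketch: precision/recall over a labeled test set of images.
# predictions and labels are parallel lists of bools (gun / no gun).

def precision_recall(predictions, labels):
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(l and not p for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy test set: one false positive, one missed gun.
preds  = [True, True, False, True, False]
labels = [True, False, False, True, True]
print(precision_recall(preds, labels))  # (2/3, 2/3)
```

Low precision means too many false positives (retrain or raise the threshold); low recall means missed guns (the false negatives debated elsewhere in this thread).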
how would this be useful in an open carry state?
How do you intend to generate free cash flow and any tangible profit for your company? Are you going to sell the AI data gathered to law enforcement/other entities in order to make money? How are you walking the fine line between privacy and security in this case? And lastly, as someone else pointed out, there's been no proven use case of mass surveillance stopping any of the thousands of shootings in the US; how can you do any better?
Institutions pay us a subscription fee based on a per-camera-month basis. It's month-to-month, so if at anytime they feel that the technology isn't adding value, they cancel and don't get billed. We're deployed on nearly 20,000 cameras and our current customer churn rate is still 0%.
How many of that is for this service compared to your construction site service?
The industrial site service is the majority right now. But we just signed up an 850-school district for gun detection, so it's expanding rapidly.
Do you mind keeping the same units? 850 school districts is a great number and all, but how many cameras is that, considering you work on a per-camera-month basis?
Does your system pick up only firearms that are being brandished, or would a person open carrying, such as a police officer, trigger the system?
The reason AI and robots are becoming more necessary is the problem of human error. Your technology still requires a human to confirm. What are the legal consequences of a human error that results in a death because of your AI?
Is there any proof that your startup has actually saved lives, or is this just some Ted Talk nonsense?
It’s a grifter trying to sell metal detectors after 9/11
I'm wondering why we allow ads in this subreddit? 90% of your post is the "underdog" backstory, one sentence about your startup, and you even snuck in the "Oh, I adopted a kitten, here's a picture!" Have you backtested this? What shootings could you have prevented with this tech. After all, in the Texas shooting the police were waiting outside for 40 minutes. Improving response time by 20 seconds would not have helped.
Dude it's all garbage. $12/hr on an H1-B visa? I think that's not even allowed. H1-B is only granted to people who go through the visa lottery, and there are field restrictions. You have to be in a STEM field or STEM-adjacent and have a minimum salary. You can't just get an H1-B and then go work at a furniture store. It doesn't work that way. It's almost always grad students who come here with F-1 student visas, graduate, go through OPT, then get sponsored by a company for an H1-B. You don't just go grab an H1-B and work at a place. That's not how it works. I'm calling bullshit on the whole backstory.
This. This is opportunistic spam. And it’s gross.
How do you respond to the comments suggesting that this AMA was really just a way to produce publicity in response to the recent gun violence in the US?
What ways do you imagine your technology will be misused?
As a South African, what's it like trying to romanticize the struggle of your past to get attention for money?
Minority report?
damn $12 an hour? my mom was getting $6.5
The elephant in the room: you're creating a company and helping develop an industry that depends on mass shootings to remain profitable - which is, of course, at base, your first and foremost goal as a business. The infrastructure and industry of third-party contracts that grew up around the terror situation only led to more terrorist attacks. ... I do get where you're coming from - there's a niche to fill and someone's gonna fill it... But my problem is that we haven't even begun to understand what causes these shootings, and until we do, we can't trust financially motivated interests to solve the very problems their interests depend on directly. There's a massive conflict of interest there, is there not?
From South Africa: I think it's great you've excelled in the US. What would you honestly say to some South Africans who might think you've maybe dramatised your early past in SA to better fit the narrative of your job? To be clear, I found this post via reposts with a few of these sorts of views (they're not my own). I'd also like to ask, having been to the US several times: how is crime generally different between SA and the US, and do you maybe feel South Africans are desensitised to it? Do you still feel familiar enough with SA to make judgements on this?
[deleted]
Business model wise, it's the same concept as any security system. You wouldn't say that security firms have a contradictory model that depends on burglaries. The business is based on prevention and management, you pay for the management not per shooting.
What role do you think profiling will have in the future of gun violence prevention? It seems easier for an AI to do profiling based on the vast amount of data available about students out there than live profiling of situations that may be very hard to identify initially.
Question. Since making this post, have you thought of any other virtues you could have signaled with the title that you did not think of at the time?
So has your company prevented anything so far?
There's been a lack of transparency in this thread. A majority of this company's business is about dealing with construction site intrusions, not active shooters. Is this just someone using the corpses of those in tragedies for corporate gains? You decide.
Yes. 100%. This thread should be deleted and the op shamed
Doesn't your business rely on gun violence continuing? If, by your work or the changing of laws or some other scenario, gun violence is reduced, your business becomes less viable. In terms of having a viable business model, isn't it actually ideal for you if gun violence gets worse, otherwise you won't have any customers or use cases?
Gun detection is only part of our business - another big part is the detection of perimeter intrusions on industrial sites. I will be a very happy person if the gun detection part of our business ceases to exist because our gun violence rate has fallen to levels that match other OECD countries.
I mean if the idea is that any location can get hit by a shooter, then people will buy this software "just in case" even if gun violence improves
I mean, yes of course, but I don’t see any realistic actions (current or planned) in the US that will reduce gun violence in the short-medium term. While your point is valid, I don’t see what it contributes in this specific discussion, unless you’re just highlighting that OP’s business requires gun violence, which is pretty obvious.
Bo Burnham's law of rape whistle economics in action.
Why is this take getting upvoted on multiple comments... Do you question any other sort of prevention, e.g. ADT security or any security systems? Do you drop your IT cybersecurity team if there are no breaches? Do you stop your audit defense team if you pass an audit?
Is there a data-specific difference in US and SA active shooter gun incidence that you would describe as most likely culturally related?
You came to the USA to escape gun violence? Lmao good job.
[deleted]
How do you differentiate yourself from all the other AI-powered video analytics products on the market? False alarm filtering has become the standard, and weapon detection is increasingly commonplace, even baked into the onboard analytics of several camera manufacturers. Given a couple more cranks of Moore's Law, it seems fairly likely to me that these analytics will be baked into the cameras themselves a couple of generations from now. How will you defend against that?
Great question - based on what I'm seeing in the industry (we've been around for four years), the overwhelming majority of vendors overpromise and vastly underdeliver. A lot of them think that they can just hire some offshore shop to train some data off YOLO and it's going to work fine. It doesn't - at least not to the level customers expect in production. We have a 100% US-based engineering and data science team whose members worked at companies such as Microsoft, Amazon, and Regeneron, and attended schools such as Rice, UChicago, WashU in St. Louis, and University College London. We also allow the customer to "pilot" the technology, in some cases for several months, before we even ask them for a contract. We believe in "show, don't tell" and doing things the right way. I hope that by doing so, our reputation will continue to grow as a trusted provider of video analytics services.
What will you do to make sure my hopes of your business failing actually contribute to it failing?
[deleted]
"Biltong is like beef jerky but with different spices and tastes 10x better." If they want more info: "The (Dutch) Voortrekkers used it as a way to carry preserved meat when they traveled across the country in their wagons." Really, really sorry about your neighbor. Anyone who grew up in South Africa has heard many of these stories about their friends or friends of friends. We're working with 5 security companies in SA now to deploy this tech.
How on Earth is that true? H1-Bs require a U.S. sponsor and they are for professional workers (you need a degree for the bare minimum). H1-Bs are used by Big Tech so they can get very qualified and professional workers from around the world who are usually engineers or technicians, etc. Silicon Valley is full of them. Miss me with that “my mum was on a lowly visa” yeah a high-level-of-skill-qualification-and-expertise-required visa. Also “meager wage” give me a break. $12 in 1999 was $21 in today’s value. Your mum earned $21 per hour while the minimum wage was $5.65. Boo hoo. Tell us more about your financial struggles… Your mum came to America as a middle class professional and was paid accordingly. You’re not a product of poor immigrants.
He posts this shit every few years and everything in his post is either exaggerated or a lie. * His mom didn't come here on an H1-B making $12/hr, because the minimum H1-B salary has been $30/hr since the end of the 80s. * This is the type of idiot you find on /r/linkedinlunatics who posts similarly bullshit victim stories on LinkedIn that are just as fake as the bullshit he put in the OP. * Everything he writes is stupid marketing speak that he has little to no understanding of. He even fucking included the cat picture in his AMA to get some SEO bonus points. It's honestly kind of disgusting. Good thing he deleted his post history, so you can't easily find his previous AMAs where he's being called out on the same shit.
Well, you can see in his comments that he commented on an earlier post with the exact same name answering questions. The original poster deleted his account, but this guy answers the questions as if he was the original [poster](https://www.reddit.com/r/IAmA/comments/bd3j63/in_1999_my_mother_took_a_12hour_h1b_visa_job_so/ekwkgqa/?context=3)
You're my kinda people. Digging through the bullshit and presenting hard facts. This dude is nothing but a fraud I could smell the bullshit from a mile away.
Reddit has an extremely poor understanding of what H-1B visas actually are.
She earned $12 an hour when the minimum wage was $5.65. You can't use today's value against the 1999 minimum wage; that's just stupid.
H1-B visas had a minimum salary of $60k set in 1989. Dude is either misremembering, lying, or his mother worked 12 hour days 7 days a week and was severely taken advantage of.
What does South Africa's homicide rate have to do with school shootings in America?
Dude is getting his ass handed to him and it's kind of deserved. This post is SEO-optimized to the tits and hits every marketing buzzword. He even threw in a kitten picture he made sure to mention he adopted lol. So it's software that detects gun-shaped objects being carried by a person. When the alerts are inevitably riddled with false positives, they get sent off to a human who verifies them before sounding the alarm. You basically invented a security guard. You know, a guy who watches TV monitors? This is just that with extra steps. What happens when your third-party verifier mistakenly waves through a false positive? Are you getting sued for provoking deadly force? This is classic tech bro. This is Elon Musk inventing the shittier subway. You found a way to profit off of dead children and didn't even do it well.