
Marking Press

What Is a Marking Press?

A marking press is a device used to apply letters or patterns to materials of various shapes without using ink. Instead, it physically indents or engraves the surface, or chemically discolors it. Markings made by a marking press are resistant to friction and do not fade easily.

Traditional methods apply pressure with a needle or die, but laser marking has emerged as a prominent non-contact method.

Uses of Marking Presses

Marking presses are used for applying production dates, serial numbers, and lot numbers on parts and products to assure quality and enhance control systems. The adoption of laser marking presses has grown due to their ability to print on complex shapes without physical contact, offering detailed printing at higher speeds without requiring special molds.

Principles of Marking Presses

Marking presses are categorized into contact and non-contact types.

1. Contact Marking Presses

Contact marking presses are subdivided into those that stamp with type and those that use needle printing.

Impact
Impact marking presses hold type in a holder or on a dial and apply pressure to the type to make an indentation in the workpiece.

Needle-Printing
Needle-printing uses a marking pin to create dots on the surface, forming an engraving. Precision grinders, another type, shave the surface with a rotating needle. However, due to their manual operation and low processing capacity, they are less common in industrial use.

2. Non-contact Marking Presses

Non-contact marking presses, or laser markers, add contrast through a chemical reaction, like oxidation, when irradiated by a laser.

Other Information on Marking Presses

1. Impact Marking Presses

Impact marking presses are operated manually or driven by air, using spring force to strike the material directly. They are simple to use and provide durable markings. Manual models allow the imprinting load to be adjusted, while air-driven presses offer the convenience of marking wherever an air source is available, including portable, handheld models.

2. Laser Marking Presses

Laser marking presses use a laser beam to thermally alter the surface of metals and resins for engraving or marking. Controlled by computers, these presses achieve precise, high-speed engravings without impacting the product. Fiber lasers, known for their small beam spot and excellent quality, are commonly used for their efficiency in cutting, marking, and welding applications. UV lasers, with a high absorption rate and minimal heat damage, are preferred for high-quality printing on sensitive products.


Arc Welding Robot

What Is an Arc Welding Robot?

Arc welding robots are robots that perform arc welding in place of humans.

Compared to other welding robots, arc welding robots are somewhat smaller. The arc discharge emits intense light and strong ultraviolet radiation, and the welding point becomes very hot, so there is a high risk of burns even when workers are protected by goggles and work clothes.

Inhaling the fumes produced from metal vapor is also harmful, which makes welding one of the tasks for which robots are in high demand as a substitute. Robots have the further advantages of welding at high speed and with stable quality, which also contributes to cost reduction.

Uses of Arc Welding Robots

Arc welding is used to join metals such as steel, aluminum, and titanium to one another. It is a welding method applicable to almost all metal structures.

The following are examples of products for which arc welding robots are used:

  • Steel frames and construction machinery
  • Land transportation machinery such as automobiles and railcars
  • Large air transportation machinery such as aircraft
  • Large marine transportation machinery such as ships

Arc welding robots are used in manufacturing plants for the above products. In recent years, the rationalization of welding has been progressing, and the number of cases in which arc welding robots have been introduced is on the rise. Arc welding robots can also be used for TIG welding and MAG welding, which are types of arc welding.

Principle of Arc Welding Robots

Arc welding is a welding method that uses electrical discharges into the air. An electric current is passed through the welding rod, which serves as the electrode, and when it comes into contact with the metal to be joined and is slowly pulled apart, an arc discharge occurs. The arc is so hot, reaching temperatures of up to 20,000°C, that the metal is quickly melted and joined.

The robot is a vertically articulated arm with six or seven axes for precise motion. Each axis has a defined range of angle and speed over which it can move, and a qualified operator teaches the robot the welding conditions. The conditions and positioning taught at this stage are important and are sometimes determined while performing actual welding.
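
To make the teaching step concrete, the sketch below shows one way taught welding conditions could be represented in software. It is purely illustrative: the field names, units, and values are assumptions, not any robot vendor's actual format or API.

```python
# Illustrative sketch of a taught weld seam: an ordered list of positions, each
# carrying the welding conditions to use when moving toward the next point.
# Field names, units, and values are assumed for illustration only.
from dataclasses import dataclass

@dataclass
class WeldPoint:
    x_mm: float                # taught torch-tip position
    y_mm: float
    z_mm: float
    travel_speed_mm_s: float   # travel speed toward the next point
    current_a: float           # welding current
    voltage_v: float           # arc voltage

# A taught program is essentially an ordered list of such points.
seam = [
    WeldPoint(0.0, 0.0, 5.0, travel_speed_mm_s=8.0, current_a=180.0, voltage_v=22.0),
    WeldPoint(120.0, 0.0, 5.0, travel_speed_mm_s=8.0, current_a=180.0, voltage_v=22.0),
]
```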

The part that actually performs the welding is called the welding torch, which is fixed to the tip of the robot. The torch and contact tip must be selected according to the object to be welded.

Structure of Arc Welding Robots

The structure of an arc welding robot consists of a manipulator, a controller, and a programming pendant.

1. Manipulator

The manipulator consists of a base, servo motors, and an end-effector arranged in a multi-axis articulated structure. The welding torch attached to the end-effector can be replaced to accommodate various welding conditions.

2. Controller

The controller consists of data storage and communication equipment with the manipulator. Welding conditions and other data are stored in the controller.

3. Programming Pendant

The programming pendant is the interface through which a person teaches welding conditions to the robot. Data describing the manipulator’s operating procedures can be created and modified, and control parameter changes and teaching are also performed from the programming pendant.

How to Select an Arc Welding Robot

Arc welding robots must be selected according to the welding material, stroke, and installation method. Welding materials include steel and aluminum. Select a robot that corresponds to the material to be welded.

The stroke is the distance the robot can extend its arm. The longer the stroke, the farther the robot can work, but the more expensive it is. When welding large parts, multiple robots may be installed.

Installation methods include wall-hanging and ceiling-suspension, depending on the conditions in which the robot is to be installed. Select an installation method suitable for the location where the robot is to be installed.

Other Information on Arc Welding Robots

Arc Welding Robot Market

The global arc welding robot market is expected to reach US$11.7 billion by 2026. In addition, the automotive industry is expected to remain strong and demand will continue to grow beyond 2024. The prevalence of automation, especially in developed countries, and labor shortage issues are also factors driving demand.

Arc welding robots are generally sold at prices starting at around several million yen. The amount varies depending on the materials to be welded and the conditions of use.


Force Sensor

What Is a Force Sensor?

A Force Sensor is a sensor that measures the magnitude of a force or moment.

It is used to reproduce the human sense of touch by detecting the amount and direction of physical force. Its main application is in robotics.

Force sensors are typically 6-axis sensors, meaning they can detect forces along three axes (X, Y, and Z) and moments around each of those axes.
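
As a rough illustration of what a 6-axis reading contains, the sketch below groups the three force components and three moment components into one record. The field names and units are assumptions for illustration only.

```python
# Illustrative sketch of a single 6-axis force sensor reading: three forces and
# three moments about the X, Y, and Z axes. Names and units are assumed.
from dataclasses import dataclass

@dataclass
class Wrench:
    fx: float  # force along X [N]
    fy: float  # force along Y [N]
    fz: float  # force along Z [N]
    mx: float  # moment about X [N*m]
    my: float  # moment about Y [N*m]
    mz: float  # moment about Z [N*m]

reading = Wrench(fx=1.2, fy=-0.4, fz=9.8, mx=0.02, my=0.01, mz=-0.03)
```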

Applications of Force Sensors

Force Sensors are used in industrial robots to automate tasks that previously had to be performed manually.

1. Detection of External and Reaction Forces

Force sensors can simultaneously measure forces and moments. Installed at the working end of a robot, a force sensor allows the robot to apply an appropriate force while measuring external and reaction forces.

2. Automation of Precision Work

Force Sensors can measure accurate forces and moments so that robots can work with appropriate forces. This enables automation of precision work by robots.

Specific tasks include insertion of electronic components and connectors with soft terminals, mating with little play, precision screw tightening, deburring, polishing with subtle force, picking work, and independent control of a biped robot.

3. Tactile Diagnosis and Remote Medical Treatment

A terminal equipped with a Force Sensor can be positioned beside a patient, allowing a doctor to access and interpret the forces and moments detected by the Force Sensor, which enables remote tactile diagnosis.

Principle of Force Sensor

Force Sensors detect the amount of deformation caused by a force and convert it into a force or moment.

Representative detection methods for force sensors include strain gage, piezoelectric, optical, and capacitive types.

1. Strain Gage Type Force Sensor

The strain gage type converts force or torque into an electrical signal by exploiting the property of metal resistive materials whose electrical resistance changes according to the tensile or compressive strain applied to the sensing element. This method is widely used for force sensors because it is compact, highly accurate, and responsive.
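
The conversion chain can be sketched with textbook relations: the bridge output voltage gives the strain via the gauge factor, and Hooke's law converts strain to force. The sketch below assumes a quarter Wheatstone bridge and a simple axial sensing element; the excitation, gauge factor, and geometry are illustrative values, not from any specific sensor.

```python
# A minimal sketch of converting a strain gage reading to force, assuming a
# quarter Wheatstone bridge and an axially loaded sensing element.
# All numbers are illustrative, not from any specific sensor datasheet.

GAUGE_FACTOR = 2.0        # typical for metal foil gages
EXCITATION_V = 5.0        # bridge excitation voltage
YOUNGS_MODULUS = 200e9    # steel sensing element [Pa]
CROSS_SECTION_M2 = 25e-6  # cross-sectional area of the element [m^2]

def bridge_voltage_to_force(v_out: float) -> float:
    """Convert quarter-bridge output voltage [V] to axial force [N]."""
    # For a quarter bridge: v_out ~= EXCITATION_V * GAUGE_FACTOR * strain / 4
    strain = 4.0 * v_out / (EXCITATION_V * GAUGE_FACTOR)
    # Hooke's law: stress = E * strain, force = stress * area
    return YOUNGS_MODULUS * strain * CROSS_SECTION_M2

print(bridge_voltage_to_force(1e-3))  # ~1 mV of bridge output -> about 2000 N here
```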

2. Piezoelectric Force Sensor

Piezoelectric Force Sensors use piezoelectric materials such as quartz crystal or PZT (lead zirconate titanate) for the sensor part to measure force. They are compact, highly responsive, and relatively inexpensive. However, the accuracy is not as high as strain gage type or capacitance type. 

3. Capacitive Force Sensor

The capacitive type has a capacitor-like structure in which metal electrodes are arranged facing each other. It detects changes in capacitance caused by changes in the distance between the electrodes when force strains the structure.

The capacitance type is characterized by its relatively simple configuration and low cost. If the electrodes are made of film, they can be made smaller and thinner. Accuracy and response are also excellent. 
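
The underlying relation is the ideal parallel-plate formula C = ε0·εr·A/d: as force compresses the gap d, the capacitance rises. The electrode area and gap values below are illustrative assumptions.

```python
# A minimal sketch of the parallel-plate relationship a capacitive force sensor
# relies on: capacitance increases as force reduces the electrode gap.
# Dimensions are illustrative.

EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def plate_capacitance(area_m2: float, gap_m: float, eps_r: float = 1.0) -> float:
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

c_rest = plate_capacitance(area_m2=1e-4, gap_m=50e-6)   # unloaded gap of 50 um
c_load = plate_capacitance(area_m2=1e-4, gap_m=49e-6)   # gap compressed by 1 um
print(c_rest, c_load)  # the capacitance increase encodes the applied force
```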

4. Optical Force Sensor

In the optical method, a pattern is marked on the object to be measured at regular intervals. The change in the pattern that occurs when a force is applied is detected by an optical sensor such as a camera or laser, and the magnitude of the force is calculated and obtained.

The greatest advantage of the optical method is that it enables non-contact measurement. On the other hand, accuracy, responsiveness, miniaturization, and cost are inferior to other methods. It is limited to special applications that require non-contact measurement.

5. HDR Force Sensor

There are what are called HDR (High Dynamic Range) Force Sensors, which are characterized by a wide dynamic range, e.g., from 10 g to 20 kg.

By combining AI and robot technology with HDR Force Sensors, fine assembly operations can be performed while adjusting for minute forces. Robots are increasingly automating and upgrading assembly work at production sites.

6. Capacitive Force Sensor

The capacitive Force Sensor is characterized by its ability to measure 6-axis components by detecting changes in the distance between two parallel plates. A simple structure can be realized and the price can be kept low.

Force Sensors with an anti-overload stopper mechanism inside the sensor are also available. Recently, these sensors are widely used in the industrial robot field. Demand is expected to grow more and more due to the increasing automation in the manufacturing industry.

Other Information on Force Sensors

Application of Force Sensors

Cooperative work between humans and robots can be realized by having humans operate robots that use Force Sensors. Fine work that requires minute force adjustment can also be performed.

In particular, in the manufacturing field, the use of Force Sensors has realized automation of tasks that can only be performed by skilled craftsmen, thereby improving productivity. In the medical field, Force Sensors are expected to be used in remote medical examinations where the condition of the affected area can be determined by tactile diagnosis.


Height Gauge

What Is a Height Gauge?

A height gauge is a measuring instrument used to measure the height of machined parts from a reference plane.

Height gauges not only measure the height of the object to be measured, but can also be used for marking. The base point is the horizontal surface on which the height gauge is placed, such as on a surface plate, and the height from this reference point is measured.

When measuring, a secondary scale called a vernier is used to read the height precisely. Height gauges also use a measuring tool called a scriber. The scriber is made of a hard material and has a pointed tip, so it can be used to mark a line on the object at a precise height, parallel to the surface plate.

Uses of Height Gauges

Height gauges are mainly used to check the manufacturing quality of processed metal products and in product development. For example, height gauges are used to check whether the height of a fabricated metal product is within the drawing specifications.

Height gauges can be used to measure the height from a flat surface accurately, such as a surface plate. The height can be measured in increments of 0.01 mm by using a vernier scale to read the scale. It features easy operation and precise height measurement, and can be used in a wide range of situations, from the measurement room to the lineside.

The scriber tip of a height gauge is made of a hard, sharp material, so it can also be used to scribe lines at a given height. It is important to perform scribing with the slider secured by a firmly tightened set screw to prevent the tip from moving.

Principle of Height Gauges

A height gauge consists of a main body base, a main scale with a scale on it, a column on which the main scale is mounted, a vernier that takes minute readings, a slider that is moved up and down for height measurement, and a scriber that serves as the measuring tool.

The height gauge is a measuring instrument that is placed on a surface plate together with the object to be measured. In measurement, the slider is lowered from the top and the bottom of the scriber is brought into contact with the object to be measured. This height is the measurement value. The reading is taken at the point where the main scale and vernier scale overlap.
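
A minimal sketch of how such a reading is composed, assuming the 0.01 mm resolution mentioned earlier; the example values are illustrative.

```python
# Illustrative composition of a vernier height gauge reading: the main scale
# value plus the vernier division that coincides with a main-scale line.

def vernier_reading(main_scale_mm: float, coinciding_division: int,
                    resolution_mm: float = 0.01) -> float:
    """Main scale value plus vernier division times the vernier resolution."""
    return main_scale_mm + coinciding_division * resolution_mm

print(vernier_reading(25.0, 37))  # -> 25.37 mm
```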

For accurate measurement, it is important not to apply more measuring force than necessary to the scriber and to read the scale from the front. Also, the scriber must be parallel to the bottom of the base. If the scriber is not adequately secured, or if the flatness of the base plate or other surface is not ensured, stable measurement will not be possible.

After many years of use, the column of the height gauge may tilt due to aging or other factors. Attach a lever-type dial gauge or similar to the point where the scriber is mounted, hold it against the side of a straightedge or similar reference, and move the slider up and down to see how the value changes. If the column is tilted, the gauge needs to be adjusted or repaired.

Other Information on Height Gauges

1. Error Factors of Height Gauges

Height gauge measurements can be subject to measurement error due to a variety of reasons. For example, excessive measuring force, thermal effects due to the temperature difference between the object to be measured and the measuring instrument, and parallax effects due to the angle at which the scale is read. It is particularly important to note that errors due to the structure of the measuring instrument are unavoidable.

The main sources of error arising from the structure of the measuring instrument are those arising from the bending of the column and the tilt of the scriber. Scriber inclination is unavoidable due to its measurement method and construction. Because the height gauge has a scriber extending from a slider mounted on the column, not only does the scriber tilt when it is mounted, but over time, gaps and rattles in the parts can cause the scriber to tilt.

Deflection due to the weight of the scriber and mounting parts can also cause the scriber to tilt. A certain amount of these structural errors occur even in new products. If the error is smaller than the resolution of the measuring instrument, there is little need to be concerned, but if the error becomes larger over time, caution is required.

Therefore, in addition to daily inspections, it is essential to perform periodic management such as calibration at a certified calibration laboratory. 

2. Precautions for Using Height Gauges

The bottom surfaces of the main scale and base must be cleaned before and after use to prevent scratches, rust, and oil from deteriorating the sliding action. Some manufacturers and products specify the parallelism between the measuring surface of the scriber and the base bottom surface of the height gauge. Dust, chips, etc., on the surface plate during use or in storage can cause scratches or chipping on the base bottom surface, resulting in deterioration of parallelism.

Storage in locations subject to rapid temperature changes is also undesirable. Repeated expansion and contraction due to thermal effects will not only deteriorate the accuracy, but also cause deformation of the measuring instrument itself.

If the instrument is located near a window or wall with inadequate insulation, it will be subject to thermal effects from temperature differences. Even if the instrument is located indoors and not exposed to direct sunlight, it is still a good idea to be cautious.


In-Circuit Board Tester

What Is an In-Circuit Board Tester?

An In-Circuit Board Tester is a testing device used to evaluate the electrical characteristics of the individual electronic components mounted on a circuit board inside an electronic device.

For an electronic device to function properly, the internal circuit board must operate properly. An In-Circuit Board Tester inspects the board with the electronic components mounted on it.

In-Circuit Board Testers can test the electrical characteristics of individual components mounted on the board with a very small amount of power. They can locate defective parts without damaging the board, and can reliably detect defective parts that are difficult to detect with the naked eye.

In-Circuit Board Tester Applications

In-Circuit Board Testers are widely used in the inspection process of development and mass production lines in factories that handle electronic circuit boards with electronic devices and components. There are two types of In-Circuit Board Testers: press-type In-Circuit Board Testers and Flying Probe Testers.

Press-type In-Circuit Board Testers are capable of high-speed inspection and are suitable for mass-production boards, but they require dedicated inspection jigs. Flying Probe Testers do not require inspection jigs and are suitable for low-volume, high-mix boards, and they can handle fine patterns.

Specific inspection items include short/open defects in soldered components; wrong-constant defects in capacitors, coils, and resistors; missing-component defects for parts such as capacitors, coils, resistors, diodes, and transistors; and lead-float defects in ICs and connectors. The tester is also used to check the operation of photocouplers, digital transistors, and Zener diodes.

Special tests include image inspection of electrically inaccessible components, adhesion (solder) defect inspection of SOPs, QFPs, etc., and simple function tests.

In-Circuit Board Tester Principle

An In-Circuit Board Tester detects defects in component constants and functions, as well as open or shorted signal lines (including internal vias), by touching probes to the required locations on the board and applying a very small electrical signal that is separate from the bias present during normal operation.

The system has the internal configuration needed to carry out the various inspections smoothly. It usually consists of a measurement section for electrical inspection, a scanner section that selects and routes the measurement lines, a probing section that brings the measurement lines into contact with specific locations on the board under test, and a control section that coordinates these sections.

The constant of an electronic component is measured from the values of voltage and current when a measurement signal is applied to the probing unit. Since electric circuits generally form a network, it is difficult to measure the constants of individual elements. However, many In-Circuit Board Testers are equipped with various functions to improve inspection accuracy.

Other Information on In-Circuit Board Testers

1. In-Circuit Board Tester Functions

Guarding function
This function electrically isolates parallel current paths to suppress the measurement errors they would otherwise cause.

Phase Separation
When an AC signal is applied to a circuit network composed of resistors, inductors, and capacitors, a phase difference is generated between the current and voltage. This phase difference can be used to accurately measure the constants of each element.
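
A minimal sketch of the idea, assuming the element under test behaves as a parallel RC network: the real part of the measured admittance gives the resistor and the imaginary part gives the capacitor. The drive frequency and component values are illustrative.

```python
# Phase separation for a parallel RC network: from the measured voltage and
# current phasors, Y = I/V = 1/R + j*omega*C, so the real and imaginary parts
# of Y yield R and C separately. Values are illustrative.
import math

FREQ_HZ = 1_000.0
OMEGA = 2 * math.pi * FREQ_HZ

def separate_parallel_rc(v_phasor: complex, i_phasor: complex):
    """Return (R_ohm, C_farad) for a parallel RC from measured phasors."""
    admittance = i_phasor / v_phasor          # Y = 1/R + j*omega*C
    r_ohm = 1.0 / admittance.real
    c_farad = admittance.imag / OMEGA
    return r_ohm, c_farad

# Example: a 1 kOhm resistor in parallel with 100 nF, driven by 1 V at 1 kHz.
y_true = 1 / 1_000 + 1j * OMEGA * 100e-9
print(separate_parallel_rc(1.0 + 0j, y_true * 1.0))  # ~ (1000.0, 1e-07)
```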

2. Flying Checker

A flying checker is a type of In-Circuit Board Tester that applies probes primarily to determine opens and shorts of the components mounted on a board. Inspection takes longer than with a standard press-type In-Circuit Board Tester, but it is used when the priority is that no program or pin board needs to be prepared in advance.

Also called a flying probe checker, it creates a net list from Gerber data and uses that data to inspect for broken traces by probing the start and end of each net. It then inspects for shorts by probing between each net and an adjacent net.
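
The checking logic described above can be sketched as follows. The thresholds are illustrative, and measure_resistance() is a hypothetical stand-in for the instrument's actual measurement call.

```python
# Sketch of flying-probe open/short checking from a net list: probe the two
# ends of each net for continuity, then probe adjacent net pairs for shorts.

OPEN_THRESHOLD_OHM = 10.0      # above this, the trace is considered broken
SHORT_THRESHOLD_OHM = 1e6      # below this, two nets are considered shorted

def check_board(netlist, adjacent_pairs, measure_resistance):
    """netlist: {net_name: [pad, ...]}; adjacent_pairs: [(net_a, net_b), ...];
    measure_resistance(pad_a, pad_b) is supplied by the instrument driver."""
    defects = []
    for net, pads in netlist.items():
        r = measure_resistance(pads[0], pads[-1])   # first and last pad of the net
        if r > OPEN_THRESHOLD_OHM:
            defects.append(("open", net))
    for net_a, net_b in adjacent_pairs:
        r = measure_resistance(netlist[net_a][0], netlist[net_b][0])
        if r < SHORT_THRESHOLD_OHM:
            defects.append(("short", (net_a, net_b)))
    return defects
```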

Many flying checkers are derived from bare-board checkers and can be used for general in-circuit testing after mounting. The specific inspection method involves clamping the board from both sides with two or four probes. This equipment inspects the open and short-circuit conditions of a printed circuit board. There are two approaches: one checks electrical continuity, and the other measures capacitance to find short circuits, with the capacitance method generally taking less time.

3. Function Tester

A Function Tester serves a distinct purpose compared to the In-Circuit Board Tester, despite often being compared in the context of inspecting similar boards during the board manufacturing process. While the primary role of the In-Circuit Board Tester is to examine the board’s assembly condition, including components and circuit continuity (such as identifying open or short circuits), the Function Tester is designed to verify whether the circuit’s functionality, such as input and output operations, is operating correctly.

This test is usually called a function test, in which electrical signals specified in the specifications are applied to the input terminals of the board to be tested. The purpose is to verify whether the board functions correctly by checking if the specified output signals are generated in accordance with the specifications. Other tests are also performed for components such as switches and LEDs that are difficult to check with just an open-short test using In-Circuit Board Tester, as well as for integrated circuit operations and software writing for MCUs and various types of ICs.

In general, when comparing in-circuit testing and function testing, function testing is more important from the perspective of confirming product operation, and most products prioritize function testing.


Coordinate Measuring Machinery (CMM)

What Is a Coordinate Measuring Machine (CMM)?

A coordinate measuring machine (CMM) is an instrument capable of measuring surface features at the submicron level (smaller than 1/1,000 of a millimeter). 3D coordinate measuring machines can capture the shape of a part in three dimensions and perform a variety of measurements.

They are also used to measure the surface roughness, height, and thickness of electronic component substrates and semiconductors. They are characterized by high speed, high resolution, and high accuracy.

There are also various types of coordinate measuring machinery depending on the installation and measurement methods. In terms of installation, there are stationary and portable types; in terms of measurement method, there are contact and non-contact types, laser trackers, layout machines, and so on.

Uses of Coordinate Measuring Machinery (CMM)

Applications of coordinate measuring machinery are as follows:

1. Line Roughness Measurement

Coordinate measuring machinery can measure typical roughness parameters such as Ra and Rz, in the same way as a stylus-type surface roughness tester.

2. Surface Roughness Measurement

Coordinate measuring machines can measure waviness and steps between surfaces with high accuracy by measuring the entire surface. Examples include washer waviness evaluation and block gauge step measurement.

3. Plane Measurement

Coordinate measuring machines are used to measure distances between two points, straight lines, circle centers, and various other planar features. They are used in many industries, including the medical device, archaeology, molding, and watch industries.

Principle of Coordinate Measuring Machinery 

Most coordinate measuring machinery (CMM) uses white light interferometry, a measurement method that uses a white light interferometer. Light interference is a phenomenon that occurs when there is a difference in the phase of light from two sources. Optical interferometers use this phenomenon to measure the state of surface irregularities, for example.

The interference of light causes a stripe pattern to appear due to the optical path difference generated by the unevenness of the sample surface. The number of stripes indicates the height of the unevenness of the sample surface. In actual use, an objective lens with a built-in reference mirror, called an interference lens, is used. White light is irradiated onto the reference mirror and objective lens, and the interference signal is observed by a camera while the objective lens is moved up and down.
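
As a rough guide to the scale involved, in two-beam interference each adjacent fringe corresponds to about half the (nominal) wavelength of the light; a real white-light interferometer locates the peak of the fringe envelope while scanning. The center wavelength below is an assumed value.

```python
# Fringe-to-height relation for two-beam interference: each adjacent fringe
# corresponds to half the nominal wavelength. Illustrative values only.

CENTER_WAVELENGTH_NM = 550.0   # assumed nominal center wavelength of white light

def step_height_nm(fringe_count: float) -> float:
    """Height difference implied by a given number of interference fringes."""
    return fringe_count * CENTER_WAVELENGTH_NM / 2.0

print(step_height_nm(3))  # 3 fringes -> about 825 nm of height difference
```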

Some models are also equipped with a high-sensitivity CMOS, a semiconductor that converts light entering through the lens into electrical signals. A solid-state imaging device using CMOS can capture an external image at the same time as the shape, allowing surface observation and measurement at the same time. The analysis contents are converted into data, such as a 3D model, which can be viewed on a CAD system.

More Information on the Coordinate Measuring Machinery 

1. 3D Coordinate Measuring Machinery (CMM) Functions

Coordinate measuring machines (CMM) available on the market today use the latest technology and can easily perform measurements that were impossible in the past. The 3D coordinates of a specific point from a virtual origin are considered difficult to measure with common measuring instruments, such as calipers and micrometers.

Measurements using virtual points and lines, as well as geometric tolerances, are also extremely difficult to perform with other measuring instruments, but coordinate measuring machinery (CMM) can handle them. Recently, it has also become possible to read the shape of a prototype in 3D and create a 3D object with a 3D printer to check the shape in the same way as the actual product.

2. Issues and Solutions for Coordinate Measuring Machinery (CMM)

The efficiency of measurement work has been dramatically improved by the highly accurate measurement technology of coordinate measuring machinery (CMM) and the increased processing speed of measurement data, but there are also the following issues:

  • High cost of installation
  • Large installation space and high maintenance requirements
  • The size of the CMM itself is limited, which in turn limits the size of the objects that can be measured.

CMMs with an articulated arm have emerged as a solution to these problems. Originally developed for manufacturers of prosthetic arms and legs, the technology is now used in transportable CMMs.

The ability to move the arm at the will of the operator has further expanded the range of measurements that can be taken. The introduction of non-contact measurement using lasers has also made it possible to measure large objects.


Humidity Sensor

What Is a Humidity Sensor?

A humidity sensor is a sensor that measures humidity in the air.

Generally, it refers to a sensor that measures relative humidity, the ratio of the actual water vapor content to the saturated water vapor content. It is sometimes combined with a temperature sensor in the form of a temperature/humidity sensor. There are also humidity sensors that detect absolute humidity, but sensors that detect relative humidity are more common.

Humidity sensors are used not only in home appliances such as air conditioners and dryers but also in machine maintenance and food processing.

Uses of Humidity Sensors

Humidity Sensors are widely used in home appliances, office automation equipment such as printers, air conditioning in homes, buildings, and facilities, and industrial facilities such as factories and warehouses.

Examples of each application are as follows:

1. General Household Products

Humidity sensors are installed in products for general household use. For example, they are used in air conditioners, refrigerators, automobiles, dryers, air purifiers, humidifiers, etc. Humidity sensors are indispensable for air conditioning products that regulate the air environment. 

2. Office Automation Equipment

Humidity sensors are also used in office automation equipment such as printers, etc. Since office automation equipment does not tolerate extreme dryness or humidity, humidity sensors are used to measure the external environment and prevent equipment malfunctions.

3. Industrial Applications

Humidity sensors are also widely used in industrial applications. They are used to control humidity in food processing plants and plant cultivation plants, as well as in semiconductor and other manufacturing sites and storage areas. Humidity sensors are also used in places where humidity control is important, such as in the manufacturing and operating environments of medical equipment and in the aerospace industry.

4. Storage Applications

Humidity control is also very important in the storage of exhibits in museums and art galleries. Therefore, humidity sensors play an important role in controlling humidity in storage locations.

Principle of Humidity Sensors

Most humidity sensors are built to measure relative humidity. The sensor measures the amount of water vapor in the air and expresses it as a ratio of the saturation water vapor content at the ambient temperature to derive the relative humidity.

Humidity sensors meant to measure absolute humidity, on the other hand, measure the amount of water vapor per cubic meter in the space. This absolute humidity is independent of temperature and indicates the amount of water vapor in the space, and is also called volumetric absolute humidity.
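
A minimal sketch of this relationship, using the Magnus approximation for saturation vapor pressure; the constants are standard textbook values and the example reading is illustrative.

```python
# Relative humidity as the ratio of actual to saturation water vapor content,
# with the Magnus approximation for saturation vapor pressure.
import math

R_WATER_VAPOR = 461.5   # specific gas constant of water vapor [J/(kg*K)]

def saturation_vapor_pressure_pa(temp_c: float) -> float:
    """Magnus approximation, valid roughly from -45 to +60 degC."""
    return 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(abs_humidity_g_m3: float, temp_c: float) -> float:
    """Relative humidity [%] from volumetric absolute humidity [g/m^3]."""
    temp_k = temp_c + 273.15
    sat_g_m3 = saturation_vapor_pressure_pa(temp_c) / (R_WATER_VAPOR * temp_k) * 1000.0
    return 100.0 * abs_humidity_g_m3 / sat_g_m3

print(relative_humidity(10.0, 25.0))  # about 43 %RH at 25 degC
```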

Types of Humidity Sensors

Electronic polymer humidity sensors are the most common type of humidity sensor and are further classified into “resistance change type” and “capacitance change type.” Both types consist of an electrode and a polymer membrane, and changes in humidity due to moisture absorption by the polymer membrane are extracted as changes in electrical signals between the electrodes.

1. Resistance-Type Humidity Sensor

A resistance-type humidity sensor detects electrical signals corresponding to changes in humidity by capturing changes in electrical resistance. It has a structure in which electrodes in the shape of a comb are arranged so that they face each other, and a polymer membrane is arranged to fill the gap between the electrodes facing each other in the shape of a comb.

When the polymer membrane absorbs moisture and water is adsorbed, the ions in the membrane can move freely, and the resistance of the membrane changes due to these ions. This change in membrane resistance causes a change in resistance (impedance) between the electrodes, so humidity can be detected by the change in electrical resistance.

Humidity sensors of the electrical resistance change type have a simple structure and can be mass-produced. They are also relatively inexpensive and durable, and since they measure electrical resistance, they are resistant to noise and suited to high-humidity environments. However, they have the disadvantage that detection does not work well when humidity is low.

2. Capacitance Change Type Humidity Sensors

The “capacitance change type” humidity sensor applies capacitor technology to detect electrical signals corresponding to changes in humidity by capturing changes in capacitance. It consists of a moisture-absorbing polymer film dielectric, such as cellulose or PVA, sandwiched between a lower electrode and an upper moisture-permeable electrode.

Moisture in the air passes through the permeable electrode and is absorbed by the polymer membrane, and the capacitance of the polymer dielectric changes according to the amount of moisture absorbed. As a result, differences in moisture content, that is, changes in humidity, can be detected as changes in capacitance.

The advantage of the capacitance change type humidity sensor is that it is more sensitive and has a faster response time than the resistive type. The disadvantage is that it requires more complicated detection circuitry.
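
As an illustration of how such a sensor's output is used, the sketch below assumes a near-linear calibration C(RH) = C0 * (1 + k * RH) and inverts it; C0 and k are hypothetical values that would normally come from the sensor's calibration data.

```python
# Turning a measured capacitance into relative humidity, assuming a linear
# calibration C(RH) = C0 * (1 + k * RH). C0 and k are assumed values.

C0_PF = 180.0       # capacitance at 0 %RH [pF] (assumed)
K_PER_RH = 0.0004   # fractional capacitance change per %RH (assumed)

def capacitance_to_rh(c_pf: float) -> float:
    """Invert the linear calibration to recover relative humidity in %."""
    return (c_pf / C0_PF - 1.0) / K_PER_RH

print(capacitance_to_rh(183.6))  # -> about 50 %RH
```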

Other Information on Humidity Sensors

1. Types of Humidity Sensors

There are two forms of humidity sensors: one is a small element that is connected to an electronic circuit, and the other places the sensing element inside a probe-shaped measuring section. Some types are resistant to condensation, while others are vulnerable to it, so it is necessary to select the appropriate type for the application.

2. Life of Humidity Sensor

Humidity sensors gradually deteriorate over many years of continuous use, and naturally, their measurement accuracy declines. In addition, the joint between the humidity sensor and the external output will also deteriorate. Considering these factors, the life span of a sensor is about 2 to 5 years, depending on the operating environment and the type of sensor installed. 

3. Humidity Sensor for Smartphones

In recent years, an increasing number of smartphones have been equipped with temperature and humidity sensors. In order to measure temperature and humidity with such smartphones equipped with temperature/humidity sensors, it is necessary to download a free application or other software.

In many cases, smartphones that do not have a temperature/humidity sensor can be used to measure temperature and humidity by attaching an external sensor.

Wireless sensors that work with smartphones include temperature and humidity sensors with Bluetooth functionality. Such external sensors are generally called “environmental sensors.” Many of these environmental sensors have multiple sensor functions such as temperature, illumination, air pressure, noise, etc., in addition to a humidity sensor.


High Frequency Welder

What Is a High Frequency Welder?

A high frequency welder is a device that welds materials using high frequency dielectric heating.

It is suitable for thermoplastic resins like polyvinyl chloride (PVC) and nylon. Welding with a high frequency welder offers higher weld strength and a cleaner finish compared to other external heating methods.

Unlike microwave heating, a high frequency welder can achieve targeted and deeper heating by using electrode plates to heat the material between them.

Uses of High Frequency Welders

High frequency welders are utilized to bond sheet materials, including:

  • Tents and life jackets.
  • Business card cases and book covers.
  • Bags made of artificial leather.
  • Exterior packaging for seasonings and toothpaste.

Due to its operating principle, the range of materials compatible with a high frequency welder is limited. However, the seamlessness of the welds results in a visually appealing finish, making it ideal for aesthetically sensitive products and ensuring airtightness and waterproofness for items like tents.

Principle of High Frequency Welders

A high frequency welder generates heat through dielectric heating, where a high-frequency voltage agitates the material’s molecules to create frictional heat, uniformly warming the material from inside. This method is effective only for dielectric materials such as vinyl chloride and polyethylene.

By applying high frequency voltage and pressure between electrodes, the material is heated to 248-266°F within seconds, reaching a semi-liquid phase. Cooling under pressure then allows the materials to fuse together.
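
The heating itself follows the standard dielectric loss relation: power per unit volume grows with frequency, field strength, and the material's loss factor. The material values and field strength below are illustrative assumptions, not measured data.

```python
# Dielectric heating per unit volume: p = 2*pi*f*eps0*eps_r*tan(delta)*E^2.
# Material values and field strength are illustrative assumptions.
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def volumetric_heating_w_m3(freq_hz: float, field_v_m: float,
                            eps_r: float, tan_delta: float) -> float:
    """Heat generated per unit volume [W/m^3] in a lossy dielectric."""
    return 2 * math.pi * freq_hz * EPS0 * eps_r * tan_delta * field_v_m ** 2

# Assumed soft-PVC-like values: eps_r ~ 5, tan(delta) ~ 0.1, 40 MHz, 1 kV/mm.
print(volumetric_heating_w_m3(40e6, 1e6, eps_r=5.0, tan_delta=0.1))
```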

Other Information on High Frequency Welders

1. The High Frequency Welder Process

This process is notable for not causing appearance defects like scorching, as it heats the weld area locally without affecting the surroundings. The process is smoke-free and does not emit harmful substances, offering a safe and environmentally friendly solution.

2. Materials to Be Welded With High Frequency Welder

Main materials include:

  • PVC (Vinyl Chloride): Commonly used for vinyl sheets and artificial leather due to its softness and thermal processing ease.
  • TPU (Thermoplastic Polyurethane): Chosen for its elasticity in items like hoses and smartphone case bumpers.
  • POF (Polyolefin): Includes polyethylene and polypropylene, used for packaging bags for condiments and toothpaste.

3. High Frequency Welder Frequencies

Selection of power and frequency depends on the material and thickness of the object. Frequencies range from 40 MHz to 200 MHz for plastic sheets, and 10 MHz to 50 MHz for thermosetting plastics, with 2.45 GHz also being used for specific applications.

4. Price of High Frequency Welders

Prices vary widely based on size and output capacity, with smaller units starting around 3,000,000 yen and larger equipment for industrial use ranging from 5,000,000 to 10,000,000 yen.


Ultrasonic Welder

What Is an Ultrasonic Welder?

An Ultrasonic Welder is a mechanical device that uses the heat generated by friction between objects to weld them together.

It is mainly used for joining plastic materials and dissimilar metals.

Because welding is performed by frictional heat, it consumes less power than other heating-based welding methods. It offers easy automation and ensures high reproducibility, as well as a good appearance after welding because no adhesives are used.

Developed in the 1960s, ultrasonic welders have been in use for over 50 years. A typical machine consists of an oscillator and a welding table, or a transducer and horn. By applying ultrasonic vibration and pressure simultaneously, the device can quickly melt and bond resin and metal materials. Ultrasonic welding machines find extensive applications across various industries. Key features are that no adhesives are used, so the appearance after welding is clean; welding relies on instantaneous frictional heat, so power consumption is low and the process is environmentally friendly; and the process is easy to automate, so it is highly reproducible.

Uses of Ultrasonic Welder

The main applications of Ultrasonic Welders are as follows:

  • Joining terminals and wiring.
  • Joining plastic products.
  • Joining dissimilar metals such as aluminum and copper materials.

Ultrasonic welding can be used to bond metals as well as plastics. It is also used to bond metal terminals and wiring, and the metal wires inside IC chips.

Principle of Ultrasonic Welder

An ultrasonic welder is a device in which a tool called a horn, driven by a transducer, presses the materials to be bonded together with a certain amount of pressure. The vibration of the horn transfers energy to the bonded surfaces for welding.

The frictional heat generated between the surfaces of the bonded objects makes welding possible. Especially when the object to be welded is metal, the ultrasonic vibration causes the metal surfaces to rub against each other, destroying the oxide film on the surfaces and providing bonding strength.

Ultrasonic Welders consist of an oscillator and a transducer. The oscillator is a device that generates ultrasonic vibrations and is designed to have a constant amplitude so that the amplitude does not change depending on the type of material to be welded. By keeping the amplitude constant, the quality of the product after welding can be ensured. The transducer consists of a Langevin transducer (commonly known as a BL transducer) and a horn member that transmits the vibration. Ultrasonic Welders perform welding by propagating ultrasonic waves from the horn to the welded product.

Other Information on Ultrasonic Welder

1.Features of Ultrasonic Welder

With ultrasonic welders, if the welding time is too long the heated material melts excessively and the resin tends to carbonize. Higher pressure from the horn on the workpiece shortens the welding time, but too much pressure can prevent the material from melting. The important point is to control the three factors of time, pressure, and heat and keep them within an appropriate range.

Advantages of Ultrasonic Welding include the following:

  • Applicable to virtually all thermoplastics.
  • Continuous seam joining and simultaneous multi-point joining are possible.
  • Low heat storage.
  • Fluxless welding eliminates the need for cleaning processes, and no sparks, flames, or smoke are generated.
  • No toxic substances are emitted during plastic welding.
  • No consumable parts or materials, energy saving and low running cost.
  • Capable of joining dissimilar metals.

While the following points are some disadvantages:

Shapes that cannot be clamped by the horn, such as complex or three-dimensional shapes, cannot be joined.
  • High amplitude may result in good weldability, but depending on conditions, scratches or cracks may occur on the resin.
  • High pressure may prevent welding.

2. Ultrasonic Horn

An ultrasonic horn is a component that efficiently transfers vibration energy to the object to be welded. Ultrasonic waves are converted into mechanical vibration by a transducer and then amplified by a component called a booster before being transmitted to the horn. The amplitude is amplified in stages and optimized at the horn tip. By concentrating the ultrasonic vibration at the horn tip, the object is impacted 40,000 times per second (at 40 kHz).
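
A minimal sketch of that amplitude chain: the transducer amplitude is multiplied by the booster gain and the horn gain, and at 40 kHz the resulting tip motion corresponds to a substantial peak velocity. The gains and starting amplitude are assumed, illustrative values.

```python
# Amplitude chain of an ultrasonic stack: transducer amplitude x booster gain
# x horn gain gives the tip amplitude. Values are illustrative assumptions.
import math

FREQ_HZ = 40_000.0             # 40 kHz, i.e. 40,000 vibration cycles per second
TRANSDUCER_AMPLITUDE_UM = 10.0 # assumed
BOOSTER_GAIN = 1.5             # assumed
HORN_GAIN = 2.0                # assumed; depends on step/catenoidal/exponential shape

tip_amplitude_um = TRANSDUCER_AMPLITUDE_UM * BOOSTER_GAIN * HORN_GAIN
peak_velocity_m_s = 2 * math.pi * FREQ_HZ * tip_amplitude_um * 1e-6

print(tip_amplitude_um, peak_velocity_m_s)  # 30 um amplitude, ~7.5 m/s peak velocity
```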

Ultrasonic horns are available in the following types:

  1. Step type: high amplitude and high stress type.
  2. Catenoidal type: intermediate in amplitude and stress.
  3. Exponential type: low amplitude and low stress.

Ultrasonic horn materials are used according to the purpose of welding, and the following materials are mainly used:

  1. Aluminum alloy
  2. Titanium alloy
  3. Die steel

3. Mask Manufacturing Using Ultrasonic Welder

Ultrasonic Welders are also used in mask manufacturing. Ultrasonic welding machines use ultrasonic vibrations to melt materials and weld them together to produce the twill lines and engravings on masks. This process eliminates the need for threads and adhesives, simplifying the production process.

In addition, ear straps made of different materials, such as natural latex rubber for the elastic part and PE for the thread, can be welded to the mask body. This means a mask can be manufactured with a single piece of equipment. The use of ultrasonic welders in mask production is expected to increase in the future.


Measuring Microscope

What Is a Measuring Microscope?

A measuring microscope is a dimensional measuring instrument that measures dimensions from images magnified by a microscope.

A measuring microscope combines an optical microscope with precisely calibrated magnification, templates for comparative measurement, and an XY stage for precise movement of the workpiece in a plane. Measuring microscopes enable non-contact measurement, allowing contours and surfaces to be observed without damaging the workpiece.

Measuring microscopes generally use telecentric optics in their optical system. In recent years, some models employ an infinity-corrected optical system in the optical head to enable differential interference observation and simple polarized light observation.

Uses of Measuring Microscopes

Measuring microscopes are used for production and quality control of relatively small mechanical parts, electronic device parts, and semiconductor products. These measuring microscopes are suitable for the measurement of small parts and fine areas that are difficult to measure without microscope magnification.

In addition to dimensional measurement, the microscope can also be used for observation using polarized light and differential interference, for example, to detect flaws in semiconductor substrates. Due to the accuracy of its magnification, it is also useful for simple inspections to determine if a product is within tolerance by performing comparative measurements using a template.

Measuring microscopes can thus be used as both a measuring instrument and a microscope, so a single unit can serve a variety of purposes.

Principle of Measuring Microscope

Measuring microscopes can be classified according to the illumination method.

1. Transillumination

Transillumination transmits light past the object and captures its shadow as a contour shape. It is used for contour and dimensional measurement.

2. Vertically Reflected Illumination

Vertically reflected illumination shines light perpendicular to the surface of an object and observes the surface through the reflected light. Vertical reflected illumination can be used not only for dimensional measurement but also for observation of surface shape. 

3. Oblique Reflection Illumination

Oblique reflection illumination is an illumination method that illuminates light at an angle to the surface of the object to be measured. The feature of this method is that the contrast of the image is emphasized, resulting in a three-dimensional and sharp image. However, it is more likely to cause errors in dimensional measurement.

Other Information on Measuring Microscope

1. Telecentric Optics

Most measuring microscopes use telecentric optics for transillumination. Microscopes that do not use telecentric optics will make objects close at hand appear larger and objects farther away appear smaller.

This phenomenon is the same as with the cameras we use in daily life. In dimensional measurement, however, this characteristic means that features at different heights are imaged at different sizes and therefore measured incorrectly.

With a lens using telecentric optics, even if the focus shifts along the optical axis, the image becomes blurred but its size remains the same. Telecentric optics are therefore indispensable for measuring microscopes, where dimensions are measured while observing through the microscope.

2. Parallelization of Measuring Microscope

Measuring microscopes are used to measure by placing the object to be measured on the XY stage. Therefore, the measurement point can be anywhere within the operating range of the XY stage. In other words, no matter where the object to be measured is located on the XY stage, the XY stage can be moved to the measurement point.

Some angles or circle diameters to be measured may require a large movement of the XY stage, but the contour of the measurement object is never placed parallel to the movement of the XY stage without special adjustment. Therefore, it is necessary to make the movement of the XY stage and the reference edge of the measurement object parallel before measurement.

If the measurement object and the XY stage are not parallelized, large errors will occur when measuring angles and parallelism. Therefore, calculations are required to compensate for the measurement results. In recent years, manufacturers have a lineup of measurement devices that create a coordinate system on the XY stage and calculate from the coordinates of the origin and measurement points. By using these devices, the man-hours required for parallelization can be reduced. 

3. Field of View of Measuring Microscope

While it is important for a microscope to be able to observe an object under large magnification, it is also important to be able to obtain a wide field of view at a time. The field of view is the area that can be observed at one time using a microscope. The field of view is determined by the diameter of the eyepiece.

The size of the field of view is expressed by the field number, and the actual field of view represents how much of the surface of the object being measured is visible at one time.

The relationship between the actual field of view and lens magnification is as follows:

Actual field of view = Number of fields of view of eyepiece / Magnification of objective lens

As can be seen from the above formula, if the number of fields of view of the eyepiece is the same, the range of the actual field of view becomes narrower as the magnification of the objective lens becomes larger. This indicates that there is a trade-off between increasing the magnification of the objective lens to magnify the object to be measured and the range that can be viewed at one time.
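
A short worked example of the formula above, with illustrative values:

```python
# Actual field of view = eyepiece field number / objective magnification.
# The field number and magnifications below are illustrative values.

def actual_field_of_view_mm(field_number_mm: float, objective_mag: float) -> float:
    """Diameter of the observable area on the object, in millimeters."""
    return field_number_mm / objective_mag

print(actual_field_of_view_mm(22.0, 10.0))  # -> 2.2 mm
print(actual_field_of_view_mm(22.0, 50.0))  # -> 0.44 mm: higher magnification, narrower field
```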

To increase the actual field of view, the diameter of the eyepiece must be increased or the magnification of the objective lens must be decreased. However, there is a limit to reducing the objective lens magnification because of the magnification required for measurement. For this reason, measuring microscopes are equipped with an XY stage and a counter that displays the amount of movement, as well as other devices to measure areas that do not fit into the field of view.