I’m having trouble with my JavaScript code when trying to match strings that contain Turkish characters like ö, ü, ş, etc.
Here’s my current code:
if (searchTerm.length > 0) {
    var userInput = $("#textfield").val();
    if (userInput.indexOf("mehmet") !== -1) {
        filteredData = $.ui.autocomplete.filter(dataArray, searchTerm);
    }
}
This code works perfectly with plain English characters. However, when I change the search to words containing Turkish characters, like “mühmet” or “gözde”, the comparison fails and nothing happens.
I thought this might be a file encoding problem, so I tried saving my JavaScript file with different encodings including UTF-8 with BOM and ISO-8859-9 (Turkish), but none of these changes fixed the issue.
What’s the proper way to handle Turkish characters in JavaScript string operations? Any suggestions would be really helpful.
Sounds like an encoding mismatch between your data source and JS file. I’ve hit this when the database stores characters differently than what’s in the code. Try console.logging the actual character codes — userInput.charCodeAt() vs. your hardcoded string — which should show you whether there’s an encoding difference that normalization can’t fix.
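A quick way to do that (dumpCodePoints is just an illustrative helper, not an existing API): print each character’s code point on both sides of the comparison, so a decomposed ö (o + combining diaeresis) stands out next to a precomposed one.

```javascript
// Illustrative debug helper: print the code points of a string so
// encoding/normalization differences become visible.
function dumpCodePoints(label, str) {
  var codes = Array.from(str).map(function (ch) {
    return "U+" + ch.codePointAt(0).toString(16).toUpperCase().padStart(4, "0");
  });
  console.log(label, codes.join(" "));
}

dumpCodePoints("typed:   ", "go\u0308zde"); // ö as o + combining diaeresis (two code points)
dumpCodePoints("expected:", "g\u00F6zde");  // ö as one precomposed character
```

If the two lines differ around the ö, you are looking at a normalization problem rather than a file-encoding one.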
This is likely a character normalization issue rather than an encoding problem. Turkish characters can exist in different Unicode representations, and what appears identical might actually be different sequences. I faced a similar challenge while developing a search feature for a Turkish e-commerce website. I solved it by normalizing both the input and the search term using the normalize() method:
var normalizedInput = userInput.normalize('NFC').toLowerCase();
var normalizedSearch = "mühmet".normalize('NFC').toLowerCase();

if (normalizedInput.indexOf(normalizedSearch) !== -1) {
    // your logic here
}
Additionally, ensure your HTML includes <meta charset="UTF-8">. Implementing character normalization and setting the correct charset resolved all my issues with Turkish character comparisons.
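To see why this matters, here is a minimal sketch (the strings are illustrative): the same visible word can be stored as two different code-point sequences, and indexOf only matches once both sides are normalized the same way.

```javascript
// "gözde" written two ways: precomposed ö vs. o + combining diaeresis.
var precomposed = "g\u00F6zde";   // uses U+00F6 (ö)
var decomposed  = "go\u0308zde";  // uses U+006F U+0308 (o + combining diaeresis)

console.log(precomposed === decomposed);                  // false
console.log(precomposed.normalize('NFC') ===
            decomposed.normalize('NFC'));                 // true

// Normalized, lowercased containment check:
var haystack = "Gözde Yılmaz".normalize('NFC').toLowerCase();
var needle   = decomposed.normalize('NFC').toLowerCase();
console.log(haystack.indexOf(needle) !== -1);             // true
```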
Had this exact problem building search for Turkish users. Normalization works but gets messy at scale.
Instead of handling every edge case in code, I automated the character matching: a workflow that normalizes search queries before comparison.
It handles Turkish character variants, case sensitivity, and common typos — it takes user input, runs normalization rules, then matches against your data. No manual string manipulation or Unicode headaches.
Saved tons of debugging time and works consistently across browsers and devices. When you add other languages later, just update workflow rules instead of rewriting JavaScript.
Character encoding at the HTML document level often causes these Turkish string issues. I ran into this exact problem building a Turkish CMS where searches would randomly fail. Even with UTF-8 JavaScript files, the browser wasn’t interpreting the characters correctly at runtime. Check that your HTML has the charset declaration in the head section, and verify your server is actually serving files with UTF-8 in the Content-Type header — you can check the response headers in the dev tools Network tab. Also try localeCompare() with a Turkish locale instead of indexOf: it handles language-specific character rules far better than basic string matching.
Hit this exact problem building search for a Turkish news site. JavaScript’s default toLowerCase() breaks with Turkish characters: an uppercase I lowercases to i instead of the dotless ı, and İ becomes i plus a combining dot, which kills your string matching. Use toLocaleLowerCase('tr-TR') and toLocaleUpperCase('tr-TR') instead. Also try new RegExp(searchTerm, 'iu') rather than indexOf: the u flag handles Unicode code points properly and i gives case-insensitive matching without the Turkish locale headaches.
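A quick sketch of the difference (the strings are illustrative; run in a UTF-8 environment):

```javascript
var term = "İSTANBUL";

// Default toLowerCase() uses root-locale rules: İ (U+0130) lowercases
// to i + U+0307 (combining dot above), so the string even changes length.
console.log(term.toLowerCase().length);        // 9, not 8
console.log(term.toLocaleLowerCase("tr-TR"));  // "istanbul"

// The dotless ı: I lowercases to ı only under the Turkish locale.
console.log("I".toLowerCase());                // "i"
console.log("I".toLocaleLowerCase("tr-TR"));   // "ı"

// Case-insensitive Unicode matching with a regex instead of indexOf:
var pattern = new RegExp("istanbul", "iu");
console.log(pattern.test("İstanbul".toLocaleLowerCase("tr-TR"))); // true
```

One caveat: the regex i flag uses Unicode simple case folding, which does not map ı to I, so lowercasing with 'tr-TR' first is still the safer combination.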
Turkish characters are a nightmare to handle manually. I’ve been down that rabbit hole building search tools at work.
Skip the normalize() headaches and encoding mess - I built an automated pipeline that does all the preprocessing for you.
It grabs your messy Turkish input, fixes the İ/i/ı conversion problems automatically, and spits out clean matches. No more debugging character codes or wrestling with locale methods.
Processes both search terms and data the same way, so you won’t get weird mismatches. Scales beyond Turkish if you need it.
Beats scattering locale logic all over your JavaScript. Check it out: https://latenode.com